Flink SQL FileSystem Connector: Partition Commit and a Custom Small-File Merging Policy


To fit into the broader Flink-Hive integration effort, the FileSystem connector in Flink SQL has seen many improvements, the most notable being the partition commit mechanism.

This article first walks through the source code behind the two components of partition commit, the trigger and the policy, and then uses a small-file merging example to show how to implement a custom partition commit policy.

PartitionCommitTrigger

In recent Flink SQL versions, the FileSystem connector natively supports partitioned data and writes it in the standard Hive partition layout, as shown below.

path
├── datetime=2019-08-25
│   ├── hour=11
│   │   ├── part-0.parquet
│   │   └── part-1.parquet
│   └── hour=12
│       └── part-0.parquet
└── datetime=2019-08-26
    └── hour=6
        └── part-0.parquet

So when does partition data that has already been written become visible to downstream consumers? This is the question of when to trigger a partition commit. According to the official documentation, two parameters control triggering:

  • sink.partition-commit.trigger: either process-time (trigger based on processing time) or partition-time (trigger based on the partition time extracted from event time).
  • sink.partition-commit.delay: the delay before a partition is committed. With a process-time trigger, the partition is committed this long after its creation, measured by the system timestamp; with a partition-time trigger, it is committed once the watermark passes the partition's own event-time timestamp plus this delay.
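
These two options are ordinary table options set in the sink DDL. A minimal sketch might look like the following (the table name, columns, and path are hypothetical; the option keys follow the official Flink documentation):

```sql
-- Hypothetical partitioned FileSystem sink; option keys follow the Flink docs
CREATE TABLE hive_sink (
  user_id BIGINT,
  event_type STRING,
  ts TIMESTAMP(3),
  ts_date STRING,
  ts_hour STRING
) PARTITIONED BY (ts_date, ts_hour) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///path/to/table',
  'format' = 'parquet',
  -- derive the partition time from the partition columns
  'partition.time-extractor.timestamp-pattern' = '$ts_date $ts_hour:00:00',
  -- commit by watermark, one hour after the partition time
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.delay' = '1 h',
  'sink.partition-commit.policy.kind' = 'success-file'
);
```

When writing to an actual Hive table through a HiveCatalog, the same `sink.partition-commit.*` options are set in the table's properties instead.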

Clearly, the process-time trigger cannot cope with fluctuations during processing: once data arrives late or the job restarts after a failure, records can no longer land in the correct partitions by event time. In practice, therefore, we almost always choose the partition-time trigger and generate watermarks ourselves. We also need the partition.time-extractor.* family of parameters to specify how the partition time is extracted (PartitionTimeExtractor); the official documentation explains this clearly, so it is not repeated here. In the source code, the class diagram of PartitionCommitTrigger is as follows.

Below we take PartitionTimeCommitTigger, the implementation triggered by partition time, as an example and briefly look at its approach (the "Tigger" misspelling is in the Flink source itself). Here is the complete code of the class.

public class PartitionTimeCommitTigger implements PartitionCommitTrigger {
    private static final ListStateDescriptor<List<String>> PENDING_PARTITIONS_STATE_DESC =
            new ListStateDescriptor<>(
                    "pending-partitions",
                    new ListSerializer<>(StringSerializer.INSTANCE));

    private static final ListStateDescriptor<Map<Long, Long>> WATERMARKS_STATE_DESC =
            new ListStateDescriptor<>(
                    "checkpoint-id-to-watermark",
                    new MapSerializer<>(LongSerializer.INSTANCE, LongSerializer.INSTANCE));

    private final ListState<List<String>> pendingPartitionsState;
    private final Set<String> pendingPartitions;

    private final ListState<Map<Long, Long>> watermarksState;
    private final TreeMap<Long, Long> watermarks;
    private final PartitionTimeExtractor extractor;
    private final long commitDelay;
    private final List<String> partitionKeys;

    public PartitionTimeCommitTigger(
            boolean isRestored,
            OperatorStateStore stateStore,
            Configuration conf,
            ClassLoader cl,
            List<String> partitionKeys) throws Exception {
        this.pendingPartitionsState = stateStore.getListState(PENDING_PARTITIONS_STATE_DESC);
        this.pendingPartitions = new HashSet<>();
        if (isRestored) {
            pendingPartitions.addAll(pendingPartitionsState.get().iterator().next());
        }

        this.partitionKeys = partitionKeys;
        this.commitDelay = conf.get(SINK_PARTITION_COMMIT_DELAY).toMillis();
        this.extractor = PartitionTimeExtractor.create(
                cl,
                conf.get(PARTITION_TIME_EXTRACTOR_KIND),
                conf.get(PARTITION_TIME_EXTRACTOR_CLASS),
                conf.get(PARTITION_TIME_EXTRACTOR_TIMESTAMP_PATTERN));

        this.watermarksState = stateStore.getListState(WATERMARKS_STATE_DESC);
        this.watermarks = new TreeMap<>();
        if (isRestored) {
            watermarks.putAll(watermarksState.get().iterator().next());
        }
    }

    @Override
    public void addPartition(String partition) {
        if (!StringUtils.isNullOrWhitespaceOnly(partition)) {
            this.pendingPartitions.add(partition);
        }
    }

    @Override
    public List<String> committablePartitions(long checkpointId) {
        if (!watermarks.containsKey(checkpointId)) {
            throw new IllegalArgumentException(String.format("Checkpoint(%d) has not been snapshot. The watermark information is: %s.",
                    checkpointId, watermarks));
        }

        long watermark = watermarks.get(checkpointId);
        watermarks.headMap(checkpointId, true).clear();

        List<String> needCommit = new ArrayList<>();
        Iterator<String> iter = pendingPartitions.iterator();
        while (iter.hasNext()) {
            String partition = iter.next();
            LocalDateTime partTime = extractor.extract(partitionKeys, extractPartitionValues(new Path(partition)));
            if (watermark > toMills(partTime) + commitDelay) {
                needCommit.add(partition);
                iter.remove();
            }
        }
        return needCommit;
    }

    @Override
    public void snapshotState(long checkpointId, long watermark) throws Exception {
        pendingPartitionsState.clear();
        pendingPartitionsState.add(new ArrayList<>(pendingPartitions));

        watermarks.put(checkpointId, watermark);
        watermarksState.clear();
        watermarksState.add(new HashMap<>(watermarks));
    }

    @Override
    public List<String> endInput() {
        ArrayList<String> partitions = new ArrayList<>(pendingPartitions);
        pendingPartitions.clear();
        return partitions;
    }
}

Note that this class maintains two pairs of essential information:

  • pendingPartitions / pendingPartitionsState: the partitions waiting to be committed, plus the corresponding state;
  • watermarks / watermarksState: the mapping from checkpoint ID to watermark timestamp (kept in a TreeMap to preserve ordering), plus the corresponding state.

This also shows that enabling checkpointing is a prerequisite for the partition commit mechanism. The snapshotState() method persists this information to state, so that partition data stays complete and correct when the job fails over. How, then, does PartitionTimeCommitTigger know which partitions to commit? Look at the committablePartitions() method:

  1. Check that the checkpoint ID is valid;
  2. Fetch the watermark for the current checkpoint ID, and use TreeMap's headMap() and clear() to drop the watermark entries for earlier checkpoint IDs (they are no longer needed);
  3. Iterate over the pending partitions and extract each partition's time with the previously configured PartitionTimeExtractor (e.g. ${year}-${month}-${day} ${hour}:00:00). If the watermark has passed the partition time plus the sink.partition-commit.delay above, the partition can be committed; all such partitions are returned.
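
Step 3 boils down to a single comparison: a partition becomes committable once the watermark exceeds the partition time plus the delay. A minimal standalone sketch of that check (the class and method names here are mine, not Flink's; toMillis mirrors the toMills() helper used above):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;

public class CommitCheck {
    // Convert a partition's LocalDateTime to epoch millis in the system zone,
    // mirroring what toMills() does in PartitionTimeCommitTigger.
    static long toMillis(LocalDateTime t) {
        return t.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();
    }

    // The commit condition from step 3: watermark must pass partition time + delay.
    static boolean committable(long watermarkMillis, LocalDateTime partTime, long delayMillis) {
        return watermarkMillis > toMillis(partTime) + delayMillis;
    }

    public static void main(String[] args) {
        LocalDateTime part = LocalDateTime.of(2020, 8, 4, 22, 0);
        long delay = 3600_000L; // sink.partition-commit.delay = 1 h
        long base = toMillis(part);
        // Watermark only 30 minutes past the partition time: not committable yet
        System.out.println(committable(base + 1800_000L, part, delay)); // false
        // Watermark just past partition time + delay: committable
        System.out.println(committable(base + delay + 1, part, delay)); // true
    }
}
```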

The PartitionCommitTrigger logic is used by the StreamingFileCommitter component, which performs the actual commit (note that StreamingFileCommitter has a fixed parallelism of 1, which readers have asked about before). The details of StreamingFileCommitter and StreamingFileWriter (the SQL counterpart of StreamingFileSink) are relatively involved and out of scope here; they will be covered in detail later.

PartitionCommitPolicy

PartitionCommitTrigger answers when a partition becomes visible to downstream consumers, while PartitionCommitPolicy answers how that visibility is signaled. According to the official documentation, it is configured with the sink.partition-commit.policy.kind parameter, which offers three commit policies (they can be combined):

  • metastore: update the partition information in the Hive Metastore (only effective when a HiveCatalog is used);
  • success-file: write a success marker file into the partition directory; the file name can be customized with the sink.partition-commit.success-file.name parameter and defaults to _SUCCESS;
  • custom: a user-defined commit policy, whose class name must be specified with the sink.partition-commit.policy.class parameter.

The internals of PartitionCommitPolicy are much simpler; the class diagram is shown below. A policy's concrete logic is implemented by overriding the commit() method.

The two default implementations, MetastoreCommitPolicy and SuccessFileCommitPolicy, are shown below; both are very easy to follow.

public class MetastoreCommitPolicy implements PartitionCommitPolicy {
    private static final Logger LOG = LoggerFactory.getLogger(MetastoreCommitPolicy.class);

    private TableMetaStore metaStore;

    public void setMetastore(TableMetaStore metaStore) {
        this.metaStore = metaStore;
    }

    @Override
    public void commit(Context context) throws Exception {
        LinkedHashMap<String, String> partitionSpec = context.partitionSpec();
        metaStore.createOrAlterPartition(partitionSpec, context.partitionPath());
        LOG.info("Committed partition {} to metastore", partitionSpec);
    }
}

public class SuccessFileCommitPolicy implements PartitionCommitPolicy {
    private static final Logger LOG = LoggerFactory.getLogger(SuccessFileCommitPolicy.class);

    private final String fileName;
    private final FileSystem fileSystem;

    public SuccessFileCommitPolicy(String fileName, FileSystem fileSystem) {
        this.fileName = fileName;
        this.fileSystem = fileSystem;
    }

    @Override
    public void commit(Context context) throws Exception {
        fileSystem.create(new Path(context.partitionPath(), fileName),
                FileSystem.WriteMode.OVERWRITE).close();
        LOG.info("Committed partition {} with success file", context.partitionSpec());
    }
}

Customize PartitionCommitPolicy

Remember the Hive Streaming experiment from before?

As the figure above shows, when writes are frequent or the parallelism is high, each partition ends up littered with many tiny files, which is undesirable. Below we try a custom PartitionCommitPolicy that merges them as part of the partition commit (the storage format is Parquet).

Unlike plain row-oriented formats such as TextFile, Parquet is a self-describing (it carries its own schema and metadata) columnar format whose data structures are organized according to the standard Google Dremel model, the same one Protobuf uses. So we should first detect the schema of the written files, then read each file according to that schema and stitch the records together. The complete policy that merges all small files within a partition, ParquetFileMergingCommitPolicy, is shown below. To avoid dependency conflicts, all Parquet-related components use the versions shaded by Flink. The code should be reasonably tidy and self-explanatory, so I have been lazy and skipped the comments.

package me.lmagics.flinkexp.hiveintegration.util;

import org.apache.flink.hive.shaded.parquet.example.data.Group;
import org.apache.flink.hive.shaded.parquet.hadoop.ParquetFileReader;
import org.apache.flink.hive.shaded.parquet.hadoop.ParquetFileWriter.Mode;
import org.apache.flink.hive.shaded.parquet.hadoop.ParquetReader;
import org.apache.flink.hive.shaded.parquet.hadoop.ParquetWriter;
import org.apache.flink.hive.shaded.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.flink.hive.shaded.parquet.hadoop.example.GroupReadSupport;
import org.apache.flink.hive.shaded.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.flink.hive.shaded.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.flink.hive.shaded.parquet.hadoop.util.HadoopInputFile;
import org.apache.flink.hive.shaded.parquet.schema.MessageType;
import org.apache.flink.table.filesystem.PartitionCommitPolicy;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ParquetFileMergingCommitPolicy implements PartitionCommitPolicy {
  private static final Logger LOGGER = LoggerFactory.getLogger(ParquetFileMergingCommitPolicy.class);

  @Override
  public void commit(Context context) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    String partitionPath = context.partitionPath().getPath();

    List<Path> files = listAllFiles(fs, new Path(partitionPath), "part-");
    LOGGER.info("{} files in path {}", files.size(), partitionPath);

    MessageType schema = getParquetSchema(files, conf);
    if (schema == null) {
      return;
    }
    LOGGER.info("Fetched parquet schema: {}", schema.toString());

    Path result = merge(partitionPath, schema, files, fs);
    LOGGER.info("Files merged into {}", result.toString());
  }

  private List<Path> listAllFiles(FileSystem fs, Path dir, String prefix) throws IOException {
    List<Path> result = new ArrayList<>();

    RemoteIterator<LocatedFileStatus> dirIterator = fs.listFiles(dir, false);
    while (dirIterator.hasNext()) {
      LocatedFileStatus fileStatus = dirIterator.next();
      Path filePath = fileStatus.getPath();
      if (fileStatus.isFile() && filePath.getName().startsWith(prefix)) {
        result.add(filePath);
      }
    }

    return result;
  }

  private MessageType getParquetSchema(List<Path> files, Configuration conf) throws IOException {
    if (files.size() == 0) {
      return null;
    }

    HadoopInputFile inputFile = HadoopInputFile.fromPath(files.get(0), conf);
    ParquetFileReader reader = ParquetFileReader.open(inputFile);
    ParquetMetadata metadata = reader.getFooter();
    MessageType schema = metadata.getFileMetaData().getSchema();

    reader.close();
    return schema;
  }

  private Path merge(String partitionPath, MessageType schema, List<Path> files, FileSystem fs) throws IOException {
    Path mergeDest = new Path(partitionPath + "/result-" + System.currentTimeMillis() + ".parquet");
    ParquetWriter<Group> writer = ExampleParquetWriter.builder(mergeDest)
      .withType(schema)
      .withConf(fs.getConf())
      .withWriteMode(Mode.CREATE)
      .withCompressionCodec(CompressionCodecName.SNAPPY)
      .build();

    for (Path file : files) {
      ParquetReader<Group> reader = ParquetReader.builder(new GroupReadSupport(), file)
        .withConf(fs.getConf())
        .build();
      Group data;
      while ((data = reader.read()) != null) {
        writer.write(data);
      }
      reader.close();
    }
    writer.close();

    for (Path file : files) {
      fs.delete(file, false);
    }

    return mergeDest;
  }
}

Don't forget to adjust the partition commit policy parameters accordingly:

'sink.partition-commit.policy.kind' = 'metastore,success-file,custom', 
'sink.partition-commit.policy.class' = 'me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy'

Run the earlier Hive Streaming job again and watch the log output:

20-08-04 22:15:00 INFO  me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy       - 14 files in path /user/hive/warehouse/hive_tmp.db/analytics_access_log_hive/ts_date=2020-08-04/ts_hour=22/ts_minute=13

// If you are familiar with Protobuf, you can see that the schema style here is exactly the same
20-08-04 22:15:00 INFO  me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy       - Fetched parquet schema: 
message hive_schema {
  optional int64 ts;
  optional int64 user_id;
  optional binary event_type (UTF8);
  optional binary from_type (UTF8);
  optional binary column_type (UTF8);
  optional int64 site_id;
  optional int64 groupon_id;
  optional int64 partner_id;
  optional int64 merchandise_id;
}

20-08-04 22:15:04 INFO  me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy       - Files merged into /user/hive/warehouse/hive_tmp.db/analytics_access_log_hive/ts_date=2020-08-04/ts_hour=22/ts_minute=13/result-1596550500950.parquet

Finally, verify the result: the merge succeeded.

That's all. Interested readers are welcome to try it out themselves.
Original article: https://www.jianshu.com/p/fb7…
