Configurations

Configuration #

CoreOptions #

Core options for paimon.

Key Default Type Description
auto-create
false Boolean Whether to create underlying storage when reading and writing the table.
bucket
1 Integer Bucket number for file store.
bucket-key
(none) String Specify the paimon distribution policy. Data is assigned to each bucket according to the hash value of bucket-key.
If you specify multiple fields, separate them with ','.
If not specified, the primary key will be used; if there is no primary key, the full row will be used.
changelog-producer
none

Enum

Whether to double write to a changelog file. This changelog file keeps the details of data changes, it can be read directly during stream reads.

Possible values:
  • "none": No changelog file.
  • "input": Double write to a changelog file when flushing memory table, the changelog is from input.
  • "full-compaction": Generate changelog files with each full compaction.
  • "lookup": Generate changelog files through 'lookup' before committing the data writing.
commit.force-compact
false Boolean Whether to force a compaction before commit.
compaction.max-size-amplification-percent
200 Integer The size amplification is defined as the amount (in percentage) of additional storage needed to store a single byte of data in the merge tree for changelog mode table.
compaction.max-sorted-run-num
2147483647 Integer The maximum sorted run number to pick for compaction. This value avoids merging too many sorted runs at the same time during compaction, which may lead to OutOfMemoryError.
compaction.max.file-num
50 Integer For file set [f_0,...,f_N], the maximum file number to trigger a compaction for an append-only table, even if sum(size(f_i)) < targetFileSize. This value avoids keeping too many small files pending, which slows down performance.
compaction.min.file-num
5 Integer For file set [f_0,...,f_N], the minimum file number which satisfies sum(size(f_i)) >= targetFileSize to trigger a compaction for an append-only table. This value avoids compacting almost-full files, which is not cost-effective.
compaction.size-ratio
1 Integer Percentage flexibility while comparing sorted run size for changelog mode table. If the candidate sorted run(s) size is 1% smaller than the next sorted run's size, then include next sorted run into this candidate set.
consumer-id
(none) String Consumer id for recording the offset of consumption in the storage.
continuous.discovery-interval
10 s Duration The discovery interval of continuous reading.
dynamic-partition-overwrite
true Boolean Whether only overwrite dynamic partition when overwriting a partitioned table with dynamic partition columns. Works only when the table has partition keys.
file.compression
(none) String Default file compression format. It can be overridden by 'file.compression.per.level'.
file.compression.per.level
Map Define different compression policies for different levels. You can add the configuration like this: 'file.compression.per.level' = '0:lz4,1:zlib'. For the orc file format, the compression value can be NONE, ZLIB, SNAPPY, LZO or LZ4; for the parquet file format, the compression value can be UNCOMPRESSED, SNAPPY, GZIP, LZO, BROTLI, LZ4 or ZSTD.
file.format
orc

Enum

Specify the message format of data files, currently orc, parquet and avro are supported.

Possible values:
  • "orc": ORC file format.
  • "parquet": Parquet file format.
  • "avro": Avro file format.
full-compaction.delta-commits
(none) Integer Full compaction will be constantly triggered after this number of delta commits.
local-sort.max-num-file-handles
128 Integer The maximal fan-in for external merge sort. It limits the number of file handles. If it is too small, it may cause intermediate merging. If it is too large, too many files will be opened at the same time, consuming memory and leading to random reads.
log.changelog-mode
auto

Enum

Specify the log changelog mode for table.

Possible values:
  • "auto": Upsert for table with primary key, all for table without primary key.
  • "all": The log system stores all changes including UPDATE_BEFORE.
  • "upsert": The log system does not store the UPDATE_BEFORE changes, the log consumed job will automatically add the normalized node, relying on the state to generate the required update_before.
log.consistency
transactional

Enum

Specify the log consistency mode for table.

Possible values:
  • "transactional": Only the data after the checkpoint can be seen by readers, the latency depends on checkpoint interval.
  • "eventual": Immediate data visibility, you may see some intermediate states, but eventually the right results will be produced, only works for table with primary key.
log.format
"debezium-json" String Specify the message format of log system.
log.key.format
"json" String Specify the key message format of log system with primary key.
log.scan.remove-normalize
false Boolean Whether to force the removal of the normalize node in streaming read. Note: This is dangerous and is likely to cause data errors if the downstream is used to calculate aggregations and the input is not a complete changelog.
lookup.cache-file-retention
1 h Duration The cached files retention time for lookup. After the file expires, if there is a need for access, it will be re-read from the DFS to build an index on the local disk.
lookup.cache-max-disk-size
9223372036854775807 bytes MemorySize Max disk size for lookup cache; you can use this option to limit the use of local disks.
lookup.cache-max-memory-size
256 mb MemorySize Max memory size for lookup cache.
lookup.hash-load-factor
0.75 Float The index load factor for lookup.
manifest.format
avro

Enum

Specify the message format of manifest files.

Possible values:
  • "orc": ORC file format.
  • "parquet": Parquet file format.
  • "avro": Avro file format.
manifest.merge-min-count
30 Integer To avoid frequent manifest merges, this parameter specifies the minimum number of ManifestFileMeta to merge.
manifest.target-file-size
8 mb MemorySize Suggested file size of a manifest file.
merge-engine
deduplicate

Enum

Specify the merge engine for table with primary key.

Possible values:
  • "deduplicate": De-duplicate and keep the last row.
  • "partial-update": Partial update non-null fields.
  • "aggregation": Aggregate fields with same primary key.
num-levels
(none) Integer Total number of levels; for example, if there are 3 levels, they are levels 0, 1 and 2.
num-sorted-run.compaction-trigger
5 Integer The sorted run number to trigger compaction. Includes level0 files (one file one sorted run) and high-level runs (one level one sorted run).
num-sorted-run.stop-trigger
(none) Integer The number of sorted runs that triggers the stopping of writes; the default value is 'num-sorted-run.compaction-trigger' + 1.
orc.bloom.filter.columns
(none) String A comma-separated list of columns for which to create a bloom filter when writing.
orc.bloom.filter.fpp
0.05 Double Define the default false positive probability for bloom filters.
page-size
64 kb MemorySize Memory page size.
partial-update.ignore-delete
false Boolean Whether to ignore delete records in partial-update mode.
partition
(none) String Define partition keys via table options; you cannot define partitions in the DDL and in table options at the same time.
partition.default-name
"__DEFAULT_PARTITION__" String The default partition name in case the dynamic partition column value is null/empty string.
partition.expiration-check-interval
1 h Duration The check interval of partition expiration.
partition.expiration-time
(none) Duration The expiration interval of a partition. A partition will be expired if its lifetime exceeds this value. The partition time is extracted from the partition value.
partition.timestamp-formatter
(none) String The formatter to format timestamp from string. It can be used with 'partition.timestamp-pattern' to create a formatter using the specified value.
  • Default formatter is 'yyyy-MM-dd HH:mm:ss' and 'yyyy-MM-dd'.
  • Supports multiple partition fields like '$year-$month-$day $hour:00:00'.
  • The timestamp-formatter is compatible with Java's DateTimeFormatter.
partition.timestamp-pattern
(none) String You can specify a pattern to get a timestamp from partitions. The formatter pattern is defined by 'partition.timestamp-formatter'.
  • By default, read from the first field.
  • If the timestamp in the partition is a single field called 'dt', you can use '$dt'.
  • If it is spread across multiple fields for year, month, day, and hour, you can use '$year-$month-$day $hour:00:00'.
  • If the timestamp is in fields dt and hour, you can use '$dt $hour:00:00'.
primary-key
(none) String Define the primary key via table options; you cannot define the primary key in the DDL and in table options at the same time.
read.batch-size
1024 Integer Read batch size for orc and parquet.
scan.bounded.watermark
(none) Long End condition "watermark" for bounded streaming mode. Streaming reading will end when a snapshot with a larger watermark is encountered.
scan.mode
default

Enum

Specify the scanning behavior of the source.

Possible values:
  • "default": Determines actual startup mode according to other table properties. If "scan.timestamp-millis" is set the actual startup mode will be "from-timestamp", and if "scan.snapshot-id" is set the actual startup mode will be "from-snapshot". Otherwise the actual startup mode will be "latest-full".
  • "latest-full": For streaming sources, produces the latest snapshot on the table upon first startup, and continue to read the latest changes. For batch sources, just produce the latest snapshot but does not read new changes.
  • "full": Deprecated. Same as "latest-full".
  • "latest": For streaming sources, continuously reads latest changes without producing a snapshot at the beginning. For batch sources, behaves the same as the "latest-full" startup mode.
  • "compacted-full": For streaming sources, produces a snapshot after the latest compaction on the table upon first startup, and continue to read the latest changes. For batch sources, just produce a snapshot after the latest compaction but does not read new changes. Snapshots of full compaction are picked when scheduled full-compaction is enabled.
  • "from-timestamp": For streaming sources, continuously reads changes starting from timestamp specified by "scan.timestamp-millis", without producing a snapshot at the beginning. For batch sources, produces a snapshot at timestamp specified by "scan.timestamp-millis" but does not read new changes.
  • "from-snapshot": For streaming sources, continuously reads changes starting from snapshot specified by "scan.snapshot-id", without producing a snapshot at the beginning. For batch sources, produces a snapshot specified by "scan.snapshot-id" but does not read new changes.
  • "from-snapshot-full": For streaming sources, produces from snapshot specified by "scan.snapshot-id" on the table upon first startup, and continuously reads changes. For batch sources, produces a snapshot specified by "scan.snapshot-id" but does not read new changes.
scan.plan-sort-partition
false Boolean Whether to sort plan files by partition fields; this allows you to read according to the partition order, even if your partition writes are out of order.
It is recommended for streaming reads of 'append-only' tables. By default, a streaming read will read the full snapshot first. To avoid reading partitions out of order, you can enable this option.
scan.snapshot-id
(none) Long Optional snapshot id used in case of "from-snapshot" or "from-snapshot-full" scan mode.
scan.timestamp-millis
(none) Long Optional timestamp used in case of "from-timestamp" scan mode.
sequence.field
(none) String The field that generates the sequence number for the primary key table; the sequence number determines which data is the most recent.
snapshot.num-retained.max
2147483647 Integer The maximum number of completed snapshots to retain.
snapshot.num-retained.min
10 Integer The minimum number of completed snapshots to retain.
snapshot.time-retained
1 h Duration The maximum time of completed snapshots to retain.
source.split.open-file-cost
4 mb MemorySize Open file cost of a source file. It is used to avoid reading too many files with a source split, which can be very slow.
source.split.target-size
128 mb MemorySize Target size of a source split when scanning a bucket.
streaming-read-mode
(none)

Enum

The mode of streaming read that specifies whether to read data from the table's file store or from its log store.

Possible values:
  • "log": Reads from the log store.
  • "file": Reads from the file store.
streaming-read-overwrite
false Boolean Whether to read the changes from overwrite in streaming mode.
target-file-size
128 mb MemorySize Target size of a file.
write-buffer-size
256 mb MemorySize Amount of data to build up in memory before converting to a sorted on-disk file.
write-buffer-spillable
(none) Boolean Whether the write buffer can spill to disk. This is enabled by default when using object storage.
write-manifest-cache
0 bytes MemorySize Cache size for reading manifest files for write initialization.
write-mode
auto

Enum

Specify the write mode for table.

Possible values:
  • "auto": The change-log for table with primary key, append-only for table without primary key.
  • "append-only": The table can only accept append-only insert operations. Neither data deduplication nor any primary key constraints will be done when inserting rows into paimon.
  • "change-log": The table can accept insert/delete/update operations.
write-only
false Boolean If set to true, compactions and snapshot expiration will be skipped. This option is used along with dedicated compact jobs.
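
A usage sketch (Java, Flink Table API) showing how a few of the core options above can be set through the WITH clause of a CREATE TABLE statement. The table name, schema, catalog name and option values are illustrative only; a Paimon catalog is assumed to have been registered already (see CatalogOptions below).

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CoreOptionsExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Assumption: a Paimon catalog named 'paimon_catalog' already exists.
            tEnv.executeSql("USE CATALOG paimon_catalog");

            // Core options are plain string key/value pairs in the WITH clause.
            tEnv.executeSql(
                    "CREATE TABLE orders (" +
                    "  order_id BIGINT," +
                    "  user_id BIGINT," +
                    "  amount DECIMAL(10, 2)," +
                    "  dt STRING," +
                    "  PRIMARY KEY (order_id, dt) NOT ENFORCED" +
                    ") PARTITIONED BY (dt) WITH (" +
                    "  'bucket' = '4'," +                   // bucket number for the file store
                    "  'changelog-producer' = 'lookup'," +  // generate changelog files via lookup
                    "  'merge-engine' = 'deduplicate'," +   // keep the last row per primary key
                    "  'file.format' = 'parquet'," +        // data file format
                    "  'snapshot.time-retained' = '2 h'" +  // retention time of completed snapshots
                    ")");
        }
    }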

CatalogOptions #

Options for paimon catalog.

Key Default Type Description
fs.allow-hadoop-fallback
true Boolean Whether to allow falling back to Hadoop File IO when no FileIO is found for the scheme.
lock-acquire-timeout
8 min Duration The maximum time to wait for acquiring the lock.
lock-check-max-sleep
8 s Duration The maximum sleep time when retrying to check the lock.
lock.enabled
false Boolean Enable Catalog Lock.
metastore
"filesystem" String Metastore of paimon catalog, supports filesystem and hive.
table.type
managed

Enum

Type of table.

Possible values:
  • "managed": Paimon owned table where the entire lifecycle of the table data is managed.
  • "external": The table where Paimon has loose coupling with the data stored in external locations.
uri
(none) String Uri of metastore server.
warehouse
(none) String The warehouse root path of catalog.
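
A minimal sketch of registering a filesystem-backed Paimon catalog from the Flink Table API; the catalog name and warehouse path are placeholders.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CatalogOptionsExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Catalog options go into the WITH clause of CREATE CATALOG;
            // 'type' = 'paimon' selects the Paimon catalog factory.
            tEnv.executeSql(
                    "CREATE CATALOG paimon_catalog WITH (" +
                    "  'type' = 'paimon'," +
                    "  'metastore' = 'filesystem'," +                // default metastore
                    "  'warehouse' = 'hdfs:///path/to/warehouse'" +  // placeholder warehouse path
                    ")");
            tEnv.executeSql("USE CATALOG paimon_catalog");
        }
    }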

HiveCatalogOptions #

Options for Hive catalog.

Key Default Type Description
hadoop-conf-dir
(none) String File directory of core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml. Currently, only local file system paths are supported.
hive-conf-dir
(none) String File directory of hive-site.xml, used to create the HiveMetastoreClient and for security authentication such as Kerberos, LDAP, Ranger and so on.
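
A sketch combining the catalog options above with the Hive-specific ones to register a Hive-metastore-backed Paimon catalog; the metastore URI, configuration directory and warehouse path are placeholders.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class HiveCatalogOptionsExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            tEnv.executeSql(
                    "CREATE CATALOG paimon_hive_catalog WITH (" +
                    "  'type' = 'paimon'," +
                    "  'metastore' = 'hive'," +                         // use the Hive metastore
                    "  'uri' = 'thrift://hive-metastore-host:9083'," +  // placeholder metastore URI
                    "  'hive-conf-dir' = '/path/to/hive/conf'," +       // directory containing hive-site.xml
                    "  'warehouse' = 'hdfs:///path/to/warehouse'" +     // placeholder warehouse path
                    ")");
        }
    }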

FlinkConnectorOptions #

Flink connector options for paimon.

Key Default Type Description
changelog-producer.lookup-wait
true Boolean When changelog-producer is set to LOOKUP, commit will wait for changelog generation by lookup.
log.system
"none" String The log system used to keep changes of the table.

Possible values:
  • "none": No log system, the data is written only to file store, and the streaming read will be directly read from the file store.
  • "kafka": Kafka log system, the data is double written to file store and kafka, and the streaming read will be read from kafka. If streaming read from file, configures streaming-read-mode to file.
scan.infer-parallelism
false Boolean If it is false, the parallelism of the source is set by 'scan.parallelism'. Otherwise, the source parallelism is inferred from the split number (batch mode) or the bucket number (streaming mode).
scan.parallelism
(none) Integer Define a custom parallelism for the scan source. By default, if this option is not defined, the planner will derive the parallelism for each statement individually by also considering the global configuration. If 'scan.infer-parallelism' is enabled, the planner will derive the parallelism from the inferred parallelism.
scan.split-enumerator.batch-size
10 Integer How many splits should be assigned to a subtask per batch in StaticFileStoreSplitEnumerator, to avoid exceeding the `akka.framesize` limit.
scan.watermark.alignment.group
(none) String A group of sources to align watermarks.
scan.watermark.alignment.max-drift
(none) Duration Maximal drift to align watermarks, before we pause consuming from the source/task/partition.
scan.watermark.alignment.update-interval
1 s Duration How often tasks should notify coordinator about the current watermark and how often the coordinator should announce the maximal aligned watermark.
scan.watermark.emit.strategy
on-event

Enum

Emit strategy for watermark generation.

Possible values:
  • "on-periodic": Emit watermark periodically, interval is controlled by Flink 'pipeline.auto-watermark-interval'.
  • "on-event": Emit watermark per record.
scan.watermark.idle-timeout
(none) Duration If no records flow in a partition of a stream for that amount of time, then that partition is considered "idle" and will not hold back the progress of watermarks in downstream operators.
sink.parallelism
(none) Integer Defines a custom parallelism for the sink. By default, if this option is not defined, the planner will derive the parallelism for each statement individually by also considering the global configuration.
streaming-read-atomic
false Boolean The option to return results per iterator instead of per record in streaming read. This ensures that there will be no checkpoint segmentation within iterator consumption.
By default, a streaming source checkpoint can be performed at any time, which means 'UPDATE_BEFORE' and 'UPDATE_AFTER' can be split across two checkpoints, so downstream jobs can see intermediate states.
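
Connector options such as the scan and sink parallelism can also be supplied per statement through Flink's dynamic table options (SQL hints), as in the sketch below. The table names are placeholders and the catalog and tables are assumed to exist; older Flink versions may require 'table.dynamic-table-options.enabled' to be set.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class FlinkConnectorOptionsExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Override scan options for a single query via dynamic table options.
            tEnv.executeSql(
                    "SELECT * FROM orders " +
                    "/*+ OPTIONS('scan.infer-parallelism' = 'false', 'scan.parallelism' = '4') */")
                .print();

            // The sink parallelism can be set the same way on the target table of an INSERT.
            tEnv.executeSql(
                    "INSERT INTO orders_archive /*+ OPTIONS('sink.parallelism' = '2') */ " +
                    "SELECT * FROM orders");
        }
    }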