SQL Query #

Just like all other tables, Paimon tables can be queried with the SELECT statement.

Batch Query #

Paimon’s batch read returns all the data in a snapshot of the table. By default, batch reads return the latest snapshot.

-- Flink SQL
SET 'execution.runtime-mode' = 'batch';
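
With batch mode set, a plain SELECT reads the latest snapshot:

SELECT * FROM t;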

Batch Time Travel #

Paimon batch reads with time travel can specify a snapshot or a tag and read the corresponding data.

-- read the snapshot with id 1L
SELECT * FROM t /*+ OPTIONS('scan.snapshot-id' = '1') */;

-- read the snapshot from the specified timestamp in unix milliseconds
SELECT * FROM t /*+ OPTIONS('scan.timestamp-millis' = '1678883047356') */;

-- read the snapshot from a specified timestamp string; it will be automatically converted to a timestamp in unix milliseconds
-- Supported formats include: yyyy-MM-dd, yyyy-MM-dd HH:mm:ss, and yyyy-MM-dd HH:mm:ss.SSS, using the default local time zone
SELECT * FROM t /*+ OPTIONS('scan.timestamp' = '2023-12-09 23:09:12') */;

-- read tag 'my-tag'
SELECT * FROM t /*+ OPTIONS('scan.tag-name' = 'my-tag') */;

-- read the snapshot from a watermark; this will match the first snapshot after the watermark
SELECT * FROM t /*+ OPTIONS('scan.watermark' = '1678883047356') */; 

Flink SQL supports the time travel syntax in version 1.18 and later.

-- read the snapshot from the specified timestamp
SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00';

-- you can also use some simple expressions (see the Flink documentation for supported functions)
SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00' + INTERVAL '1' DAY;

Batch Incremental #

Read incremental changes between the start snapshot (exclusive) and the end snapshot.

For example:

  • ‘5,10’ means changes between snapshot 5 and snapshot 10.
  • ‘TAG1,TAG3’ means changes between TAG1 and TAG3.

-- incremental between snapshot ids
SELECT * FROM t /*+ OPTIONS('incremental-between' = '12,20') */;

-- incremental between snapshot timestamps in milliseconds
SELECT * FROM t /*+ OPTIONS('incremental-between-timestamp' = '1692169000000,1692169900000') */;
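
Following the ‘TAG1,TAG3’ example in the list above, the same option can also be given tag names:

-- incremental between tags
SELECT * FROM t /*+ OPTIONS('incremental-between' = 'TAG1,TAG3') */;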

By default, Paimon scans changelog files for tables that produce changelog files; otherwise, it scans newly changed files. You can also force a specific scan mode by setting 'incremental-between-scan-mode'.

In Batch SQL, DELETE records are not allowed to be returned, so records of -D will be dropped. If you want to see DELETE records, you can query the audit_log table:

SELECT * FROM t$audit_log /*+ OPTIONS('incremental-between' = '12,20') */;

Streaming Query #

By default, streaming reads produce the latest snapshot on the table upon first startup, and then continue to read the latest changes.

Paimon by default ensures that your startup is processed properly, with all of the existing data included.

Paimon Source in Streaming mode is unbounded, like a queue that never ends.

-- Flink SQL
SET 'execution.runtime-mode' = 'streaming';
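
A plain SELECT then reads the latest snapshot first and continues with the latest changes:

SELECT * FROM t;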

You can also do a streaming read without the snapshot data by using the latest scan mode:

-- Continuously reads latest changes without producing a snapshot at the beginning.
SELECT * FROM t /*+ OPTIONS('scan.mode' = 'latest') */;

Streaming Time Travel #

If you only want to process data for today and beyond, you can do so with partition filters:

SELECT * FROM t WHERE dt > '2023-06-26';

If it’s not a partitioned table, or you can’t filter by partition, you can use time travel’s streaming read.

-- read changes from snapshot id 1L 
SELECT * FROM t /*+ OPTIONS('scan.snapshot-id' = '1') */;

-- read changes from the snapshot at the specified timestamp
SELECT * FROM t /*+ OPTIONS('scan.timestamp-millis' = '1678883047356') */;

-- read snapshot id 1L upon first startup, and continue to read the changes
SELECT * FROM t /*+ OPTIONS('scan.mode'='from-snapshot-full','scan.snapshot-id' = '1') */;

Flink SQL supports the time travel syntax in version 1.18 and later.

-- read the snapshot from the specified timestamp
SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00';

-- you can also use some simple expressions (see the Flink documentation for supported functions)
SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-01-01 00:00:00' + INTERVAL '1' DAY;

Time travel’s streaming read relies on snapshots, but by default snapshots are only retained for 1 hour, which can prevent you from reading older incremental data. So, Paimon also provides another mode for streaming reads, scan.file-creation-time-millis, which provides rough filtering that retains files generated after timeMillis.

SELECT * FROM t /*+ OPTIONS('scan.file-creation-time-millis' = '1678883047356') */;

Consumer ID #

You can specify the consumer-id when streaming reading a table:

SELECT * FROM t /*+ OPTIONS('consumer-id' = 'myid', 'consumer.expiration-time' = '1 d', 'consumer.mode' = 'at-least-once') */;

When you stream read a Paimon table with a consumer id, the next snapshot id to consume is recorded into the file system. This has several advantages:

  1. When the previous job is stopped, a newly started job can continue to consume from the previous progress without resuming from the state. The new job will start reading from the next snapshot id found in the consumer files. If you don’t want this behavior, you can set 'consumer.ignore-progress' to true, as shown in the sketch after this list.
  2. When deciding whether a snapshot has expired, Paimon looks at all the consumers of the table in the file system, and if there are consumers that still depend on this snapshot, then this snapshot will not be deleted by expiration.
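
A minimal sketch of the 'consumer.ignore-progress' option mentioned in the first point, reusing the consumer id 'myid' from the earlier example:

SELECT * FROM t /*+ OPTIONS('consumer-id' = 'myid', 'consumer.ignore-progress' = 'true') */;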

NOTE 1: The consumer will prevent expiration of the snapshot. You can specify 'consumer.expiration-time' to manage the lifetime of consumers.

NOTE 2: If you don’t want to affect the checkpoint time, you need to configure 'consumer.mode' = 'at-least-once'. This mode allows readers to consume snapshots at different rates and records the slowest snapshot-id among all readers into the consumer. This mode can provide more capabilities, such as watermark alignment.

NOTE 3: Regarding 'consumer.mode': since the implementations of exactly-once mode and at-least-once mode are completely different, the Flink state is incompatible between them and cannot be restored when switching modes.

You can reset a consumer with a given consumer ID and next snapshot ID, and you can delete a consumer with a given consumer ID. First, you need to stop the streaming task using this consumer ID, and then execute the reset consumer action job.

Run the following command:

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.9.0.jar \
    reset-consumer \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    --consumer_id <consumer-id> \
    [--next_snapshot <next-snapshot-id>] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]

Please don’t specify the --next_snapshot parameter if you want to delete the consumer.

Read Overwrite #

Streaming reading will ignore the commits generated by INSERT OVERWRITE by default. If you want to read the commits of OVERWRITE, you can configure streaming-read-overwrite.
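
A minimal sketch enabling it through a dynamic hint:

SELECT * FROM t /*+ OPTIONS('streaming-read-overwrite' = 'true') */;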

Read Parallelism #

By default, the parallelism of batch reads is the same as the number of splits, while the parallelism of stream reads is the same as the number of buckets, but not greater than scan.infer-parallelism.max.

If you disable scan.infer-parallelism, the global parallelism will be used for reads.

You can also manually specify the parallelism with scan.parallelism.

  • scan.infer-parallelism (Default: true, Type: Boolean): If false, the parallelism of the source is set by the global parallelism. Otherwise, the source parallelism is inferred from the split number (batch mode) or the bucket number (streaming mode).
  • scan.infer-parallelism.max (Default: 1024, Type: Integer): If scan.infer-parallelism is true, limit the parallelism of the source through this option.
  • scan.parallelism (Default: (none), Type: Integer): Define a custom parallelism for the scan source. By default, if this option is not defined, the planner derives the parallelism for each statement individually by also considering the global configuration. If the user enables scan.infer-parallelism, the planner derives the parallelism from the inferred parallelism.
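
For example, a sketch that pins the source parallelism through a dynamic hint (the value 4 is arbitrary):

SELECT * FROM t /*+ OPTIONS('scan.parallelism' = '4') */;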

Query Optimization #

The following applies to both batch and streaming queries.

It is highly recommended to specify partition and primary key filters along with the query, which will speed up the data skipping of the query.

The filter functions that can accelerate data skipping are:

  • =
  • <
  • <=
  • >
  • >=
  • IN (...)
  • LIKE 'abc%'
  • IS NULL
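
For instance, a sketch of such filters; the columns k and name are hypothetical:

SELECT * FROM t WHERE k IN (1, 2, 3);
SELECT * FROM t WHERE name LIKE 'abc%';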

Paimon will sort the data by primary key, which speeds up the point queries and range queries. When using a composite primary key, it is best for the query filters to form a leftmost prefix of the primary key for good acceleration.

Suppose that a table has the following specification:

CREATE TABLE orders (
    catalog_id BIGINT,
    order_id BIGINT,
    .....,
    PRIMARY KEY (catalog_id, order_id) NOT ENFORCED -- composite primary key
);

The query obtains a good acceleration by specifying a range filter for the leftmost prefix of the primary key.

SELECT * FROM orders WHERE catalog_id=1025;

SELECT * FROM orders WHERE catalog_id=1025 AND order_id=29495;

SELECT * FROM orders
  WHERE catalog_id=1025
  AND order_id>2035 AND order_id<6000;

However, the following filter cannot accelerate the query well.

SELECT * FROM orders WHERE order_id=29495;

SELECT * FROM orders WHERE catalog_id=1025 OR order_id=29495;