
OSS #

Download paimon-oss-0.4.0-incubating.jar.
Flink #

If you have already configured OSS access through Flink (via the Flink FileSystem), you can skip the following configuration.

Put paimon-oss-0.4.0-incubating.jar into the lib directory of your Flink home, then create a catalog:

CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'oss://path/to/warehouse',
    'fs.oss.endpoint' = 'oss-cn-hangzhou.aliyuncs.com',
    'fs.oss.accessKeyId' = 'xxx',
    'fs.oss.accessKeySecret' = 'yyy'
);
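Once the catalog is created, it behaves like any other Flink catalog. A minimal sketch of using it (the table name, schema, and data below are illustrative, not from the Paimon docs):

```sql
-- Switch to the newly created OSS-backed catalog
USE CATALOG my_catalog;

-- Create an example table; its files are stored under the OSS warehouse path
-- (table name and schema are illustrative)
CREATE TABLE word_count (
    word STRING,
    cnt BIGINT,
    PRIMARY KEY (word) NOT ENFORCED
);

-- Write and read through the catalog as usual
INSERT INTO word_count VALUES ('paimon', 1);
SELECT * FROM word_count;
```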
Spark #

If you have already configured OSS access through Spark (via the Hadoop FileSystem), you can skip the following configuration.

Place paimon-oss-0.4.0-incubating.jar together with paimon-spark-0.4.0-incubating.jar under Spark’s jars directory, then start the Spark SQL shell like this:

spark-sql \
  --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
  --conf spark.sql.catalog.paimon.warehouse=oss://<bucket-name>/ \
  --conf spark.sql.catalog.paimon.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com \
  --conf spark.sql.catalog.paimon.fs.oss.accessKeyId=xxx \
  --conf spark.sql.catalog.paimon.fs.oss.accessKeySecret=yyy
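Inside the shell, tables in the catalog registered as paimon above can be addressed by their fully qualified name. A minimal sketch (the database and table names are illustrative):

```sql
-- Fully qualified name: <catalog>.<database>.<table>
-- (database and table names are illustrative)
SELECT * FROM paimon.default.my_table;
```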
Hive #

If you have already configured OSS access through Hive (via the Hadoop FileSystem), you can skip the following configuration.

NOTE: You need to ensure that the Hive metastore can access OSS.

Place paimon-oss-0.4.0-incubating.jar together with paimon-hive-connector-0.4.0-incubating.jar under Hive’s auxlib directory, then set the OSS options in your Hive session:

SET paimon.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com;
SET paimon.fs.oss.accessKeyId=xxx;
SET paimon.fs.oss.accessKeySecret=yyy;

Then you can read Paimon tables from the Hive metastore; the tables can be created by Flink or Spark (see Catalog with Hive Metastore):

SELECT * FROM test_table;
SELECT COUNT(1) FROM test_table;

Trino #

Place paimon-oss-0.4.0-incubating.jar together with paimon-trino-0.4.0-incubating.jar under Trino’s plugin/paimon directory.

Add the following options to etc/catalog/paimon.properties:

fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId=xxx
fs.oss.accessKeySecret=yyy
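The catalog name comes from the properties file name, so it is registered as paimon. With that in place, Paimon tables can be queried from the Trino CLI; a minimal sketch (schema and table names are illustrative):

```sql
-- Query a Paimon table through the Trino catalog named after paimon.properties
-- (schema and table names are illustrative)
SELECT * FROM paimon.default.test_table;
SELECT count(*) FROM paimon.default.test_table;
```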