Filesystems #
Apache Paimon utilizes the same pluggable file systems as Apache Flink. If you use Flink as the compute engine, you can follow the standard plugin mechanism to configure the plugin structure. However, for other engines such as Spark or Hive, the opt jars provided by Flink may cause class conflicts and cannot be used directly. Because fixing class conflicts is inconvenient for users, Paimon provides self-contained, engine-unified pluggable FileSystem jars, so that users can query tables from the Spark or Hive side.
Supported FileSystems #
| FileSystem | URI Scheme | Pluggable | Description |
|---|---|---|---|
| Local File System | `file://` | N | Built-in support |
| HDFS | `hdfs://` | N | Built-in support; ensure that the cluster is in a Hadoop environment |
| Aliyun OSS | `oss://` | Y | |
| S3 | `s3://` | Y | |
Dependency #
We recommend downloading the jar directly: Download Link.
You can also manually build the bundled jar from the source code. To build from source, clone the git repository and build the shaded jar with the following command:

```bash
mvn clean install -DskipTests
```

You can find the shaded jars under `./paimon-filesystems/paimon-${fs}/target/paimon-${fs}-1.0-SNAPSHOT.jar`.
HDFS #
You don’t need any additional dependencies to access HDFS, provided the Hadoop dependencies are already in place.
HDFS Configuration #
For HDFS, the most important thing is to be able to read your HDFS configuration.

You may not need to do anything if you are already in a Hadoop environment. Otherwise, pick one of the following ways to configure HDFS (a sketch of the catalog-based approaches follows this list):

- Set the environment variable `HADOOP_HOME` or `HADOOP_CONF_DIR`.
- Configure `'hadoop-conf-dir'` in the Paimon catalog.
- Configure Hadoop options through the prefix `'hadoop.'` in the Paimon catalog.

The first approach is recommended.

If you do not want to include the value of the environment variable, you can configure `hadoop-conf-loader` to `option`.
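A minimal sketch of the two catalog-based approaches in Flink SQL; the warehouse and configuration paths below are placeholders:

```sql
CREATE CATALOG hdfs_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'hdfs:///path/to/warehouse',   -- placeholder path
    -- second approach: point Paimon at a Hadoop configuration directory
    'hadoop-conf-dir' = '/path/to/hadoop/conf',  -- placeholder path
    -- third approach: pass individual Hadoop options via the 'hadoop.' prefix
    'hadoop.dfs.replication' = '2'
);
```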
Hadoop-compatible file systems (HCFS) #
All Hadoop file systems are automatically available when the Hadoop libraries are on the classpath. In this way, Paimon seamlessly supports all Hadoop file systems that implement the `org.apache.hadoop.fs.FileSystem` interface, as well as all Hadoop-compatible file systems (HCFS), for example:
- HDFS
- Alluxio (see configuration specifics below)
- XtreemFS
- …
The Hadoop configuration must have an entry for the required file system implementation in the `core-site.xml` file. For Alluxio support, add the following entry into the `core-site.xml` file:
```xml
<property>
  <name>fs.alluxio.impl</name>
  <value>alluxio.hadoop.FileSystem</value>
</property>
```
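With that entry in place, an HCFS path can be used directly as the warehouse location. A minimal sketch, assuming a hypothetical Alluxio master at `alluxio-master:19998`:

```sql
CREATE CATALOG alluxio_catalog WITH (
    'type' = 'paimon',
    -- host, port and path are illustrative placeholders
    'warehouse' = 'alluxio://alluxio-master:19998/paimon/warehouse'
);
```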
Kerberos #
Configure the following three options in your catalog configuration:
- `security.kerberos.login.keytab`: Absolute path to a Kerberos keytab file that contains the user credentials. Please make sure it is copied to each machine.
- `security.kerberos.login.principal`: Kerberos principal name associated with the keytab.
- `security.kerberos.login.use-ticket-cache`: True or false; indicates whether to read from your Kerberos ticket cache.
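For example, in a Flink SQL catalog definition; the keytab path, principal, and warehouse below are placeholders:

```sql
CREATE CATALOG kerberized_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'hdfs:///path/to/warehouse',                   -- placeholder
    'security.kerberos.login.keytab' = '/path/to/paimon.keytab', -- placeholder
    'security.kerberos.login.principal' = 'paimon@EXAMPLE.COM'   -- placeholder
);
```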
For the Java API:

```java
SecurityContext.install(catalogOptions);
```
HDFS HA #
Ensure that `hdfs-site.xml` and `core-site.xml` contain the necessary HA configuration.
HDFS ViewFS #
Ensure that `hdfs-site.xml` and `core-site.xml` contain the necessary ViewFs configuration.
OSS #
Download paimon-oss-1.0-SNAPSHOT.jar. If you have already configured OSS access through Flink (via the Flink FileSystem), you can skip the following configuration.
Put `paimon-oss-1.0-SNAPSHOT.jar` into the `lib` directory of your Flink home, and create a catalog:
```sql
CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'oss://<bucket>/<path>',
    'fs.oss.endpoint' = 'oss-cn-hangzhou.aliyuncs.com',
    'fs.oss.accessKeyId' = 'xxx',
    'fs.oss.accessKeySecret' = 'yyy'
);
```
If you have already configured OSS access through Spark (via the Hadoop FileSystem), you can skip the following configuration.
Place `paimon-oss-1.0-SNAPSHOT.jar` together with `paimon-spark-1.0-SNAPSHOT.jar` under Spark’s jars directory, and start like this:
```bash
spark-sql \
  --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
  --conf spark.sql.catalog.paimon.warehouse=oss://<bucket>/<path> \
  --conf spark.sql.catalog.paimon.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com \
  --conf spark.sql.catalog.paimon.fs.oss.accessKeyId=xxx \
  --conf spark.sql.catalog.paimon.fs.oss.accessKeySecret=yyy
```
If you have already configured OSS access through Hive (via the Hadoop FileSystem), you can skip the following configuration.

NOTE: You need to ensure that the Hive metastore can access `oss`.
Place `paimon-oss-1.0-SNAPSHOT.jar` together with `paimon-hive-connector-1.0-SNAPSHOT.jar` under Hive’s auxlib directory, and start like this:
```sql
SET paimon.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com;
SET paimon.fs.oss.accessKeyId=xxx;
SET paimon.fs.oss.accessKeySecret=yyy;
```
Then read the table from the Hive metastore; the table can be created by Flink or Spark, see Catalog with Hive Metastore.

```sql
SELECT * FROM test_table;
SELECT COUNT(1) FROM test_table;
```
From version 0.8, paimon-trino uses the Trino filesystem as the basic file read and write system. We strongly recommend you use jindo-sdk in Trino.
You can find how to configure the Jindo SDK on Trino here. Please note that:

- Use `paimon` to replace `hive-hadoop2` when you decompress the plugin jar and find the location to put it in.
- You can specify the `core-site.xml` in `paimon.properties` via the configuration `hive.config.resources`.
- Presto and Jindo use the same configuration method.
S3 #
Download paimon-s3-1.0-SNAPSHOT.jar. If you have already configured S3 access through Flink (via the Flink FileSystem), you can skip the following configuration.
Put `paimon-s3-1.0-SNAPSHOT.jar` into the `lib` directory of your Flink home, and create a catalog:
```sql
CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 's3://<bucket>/<path>',
    's3.endpoint' = 'your-endpoint-hostname',
    's3.access-key' = 'xxx',
    's3.secret-key' = 'yyy'
);
```
If you have already configured S3 access through Spark (via the Hadoop FileSystem), you can skip the following configuration.
Place `paimon-s3-1.0-SNAPSHOT.jar` together with `paimon-spark-1.0-SNAPSHOT.jar` under Spark’s jars directory, and start like this:
```bash
spark-sql \
  --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
  --conf spark.sql.catalog.paimon.warehouse=s3://<bucket>/<path> \
  --conf spark.sql.catalog.paimon.s3.endpoint=your-endpoint-hostname \
  --conf spark.sql.catalog.paimon.s3.access-key=xxx \
  --conf spark.sql.catalog.paimon.s3.secret-key=yyy
```
If you have already configured S3 access through Hive (via the Hadoop FileSystem), you can skip the following configuration.

NOTE: You need to ensure that the Hive metastore can access `s3`.
Place `paimon-s3-1.0-SNAPSHOT.jar` together with `paimon-hive-connector-1.0-SNAPSHOT.jar` under Hive’s auxlib directory, and start like this:
```sql
SET paimon.s3.endpoint=your-endpoint-hostname;
SET paimon.s3.access-key=xxx;
SET paimon.s3.secret-key=yyy;
```
Then read the table from the Hive metastore; the table can be created by Flink or Spark, see Catalog with Hive Metastore.

```sql
SELECT * FROM test_table;
SELECT COUNT(1) FROM test_table;
```
Paimon uses the shared Trino filesystem as the basic read and write system. Please refer to Trino S3 to configure the S3 filesystem in Trino.
S3 Compliant Object Stores #
The S3 Filesystem also supports S3-compliant object stores such as MinIO, Tencent’s COS, and IBM’s Cloud Object Storage. Just configure your endpoint to point to the provider of the object store service:
```
s3.endpoint: your-endpoint-hostname
```
Configure Path Style Access #
Some S3-compliant object stores might not have virtual host style addressing enabled by default, for example a standalone MinIO deployment used for testing. In such cases, set the following property to enable path-style access:

```
s3.path.style.access: true
```
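For example, a catalog pointing at a standalone MinIO instance might look like the following sketch; the endpoint, bucket, and credentials are placeholders:

```sql
CREATE CATALOG minio_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 's3://my-bucket/paimon',     -- placeholder bucket and path
    's3.endpoint' = 'http://minio-host:9000',  -- placeholder MinIO endpoint
    's3.access-key' = 'xxx',
    's3.secret-key' = 'yyy',
    's3.path.style.access' = 'true'
);
```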
S3A Performance #
See Tune Performance for `S3AFileSystem`.
If you encounter the following exception:

```
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool.
```

try to configure this in your catalog options: `fs.s3a.connection.maximum=1000`.
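A sketch of how this could look in a catalog definition; the warehouse and credentials are placeholders:

```sql
CREATE CATALOG s3a_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 's3://my-bucket/paimon',   -- placeholder bucket and path
    's3.access-key' = 'xxx',
    's3.secret-key' = 'yyy',
    -- enlarge the S3A connection pool to avoid pool timeouts
    'fs.s3a.connection.maximum' = '1000'
);
```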