Modifier and Type | Method and Description |
---|---|
void | DelegateCatalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists) |
void | Catalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists). Modify an existing table from SchemaChanges. |
void | AbstractCatalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists) |
void | CachingCatalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists) |
default void | Catalog.alterTable(Identifier identifier, SchemaChange change, boolean ignoreIfNotExists). Modify an existing table from a single SchemaChange. |
protected void | FileSystemCatalog.alterTableImpl(Identifier identifier, List<SchemaChange> changes) |
protected abstract void | AbstractCatalog.alterTableImpl(Identifier identifier, List<SchemaChange> changes) |
void | DelegateCatalog.createPartition(Identifier identifier, Map<String,String> partitions) |
void | Catalog.createPartition(Identifier identifier, Map<String,String> partitionSpec). Create the partition of the specified table. |
void | AbstractCatalog.createPartition(Identifier identifier, Map<String,String> partitionSpec) |
void | DelegateCatalog.dropPartition(Identifier identifier, Map<String,String> partitions) |
void | Catalog.dropPartition(Identifier identifier, Map<String,String> partitions). Drop the partition of the specified table. |
void | AbstractCatalog.dropPartition(Identifier identifier, Map<String,String> partitionSpec) |
void | CachingCatalog.dropPartition(Identifier identifier, Map<String,String> partitions) |
void | DelegateCatalog.dropTable(Identifier identifier, boolean ignoreIfNotExists) |
void | Catalog.dropTable(Identifier identifier, boolean ignoreIfNotExists). Drop a table. |
void | AbstractCatalog.dropTable(Identifier identifier, boolean ignoreIfNotExists) |
void | CachingCatalog.dropTable(Identifier identifier, boolean ignoreIfNotExists) |
protected Table | AbstractCatalog.getDataOrFormatTable(Identifier identifier) |
protected AbstractCatalog.TableMeta | AbstractCatalog.getDataTableMeta(Identifier identifier) |
TableSchema | FileSystemCatalog.getDataTableSchema(Identifier identifier) |
protected abstract TableSchema | AbstractCatalog.getDataTableSchema(Identifier identifier) |
Table | DelegateCatalog.getTable(Identifier identifier) |
Table | Catalog.getTable(Identifier identifier). Return a Table identified by the given Identifier. |
Table | AbstractCatalog.getTable(Identifier identifier) |
Table | CachingCatalog.getTable(Identifier identifier) |
List<PartitionEntry> | DelegateCatalog.listPartitions(Identifier identifier) |
List<PartitionEntry> | Catalog.listPartitions(Identifier identifier). Get PartitionEntry of all partitions of the table. |
List<PartitionEntry> | AbstractCatalog.listPartitions(Identifier identifier) |
List<PartitionEntry> | CachingCatalog.listPartitions(Identifier identifier) |
void | CachingCatalog.refreshPartitions(Identifier identifier) |
void | DelegateCatalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists) |
void | Catalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists). Rename a table. |
void | AbstractCatalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists) |
void | CachingCatalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists) |
void | DelegateCatalog.repairTable(Identifier identifier) |
default void | Catalog.repairTable(Identifier identifier) |
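The core Catalog methods above share the same usage pattern. Below is a minimal sketch in Java that strings a few of them together; the database and table names, the added column, and the table option are illustrative assumptions, and the Catalog instance is assumed to have been created elsewhere (for example through a catalog factory).

```java
import java.util.Arrays;
import java.util.List;

import org.apache.paimon.catalog.Catalog;
import org.apache.paimon.catalog.Identifier;
import org.apache.paimon.manifest.PartitionEntry;
import org.apache.paimon.schema.SchemaChange;
import org.apache.paimon.table.Table;
import org.apache.paimon.types.DataTypes;

public class CatalogUsageSketch {

    // Hypothetical helper: exercises the Catalog methods listed in the table above.
    static void example(Catalog catalog) throws Exception {
        Identifier identifier = Identifier.create("my_db", "my_table"); // placeholder names

        // alterTable: modify an existing table from a list of SchemaChanges.
        List<SchemaChange> changes =
                Arrays.asList(
                        SchemaChange.addColumn("new_col", DataTypes.STRING()),
                        SchemaChange.setOption("snapshot.time-retained", "2 h"));
        catalog.alterTable(identifier, changes, false);

        // getTable: return a Table identified by the given Identifier.
        Table table = catalog.getTable(identifier);

        // listPartitions: get PartitionEntry of all partitions of the table.
        List<PartitionEntry> partitions = catalog.listPartitions(identifier);
        System.out.println(table.name() + " has " + partitions.size() + " partitions");

        // renameTable and dropTable.
        Identifier renamed = Identifier.create("my_db", "my_table_renamed");
        catalog.renameTable(identifier, renamed, false);
        catalog.dropTable(renamed, true);
    }
}
```

Note that the ignoreIfNotExists flags decide whether a missing table is silently skipped or reported as an exception, so choose them to match how strict the caller needs to be.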
Modifier and Type | Method and Description |
---|---|
MultiTableScanBase.ScanResult | MultiTableScanBase.scanTable(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> ctx) |
protected void | MultiTableScanBase.updateTableMap() |
Modifier and Type | Method and Description |
---|---|
static CleanOrphanFilesResult | FlinkOrphanFilesClean.executeDatabaseOrphanFiles(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env, Catalog catalog, long olderThanMillis, SerializableConsumer<Path> fileCleaner, Integer parallelism, String databaseName, String tableName) |
Modifier and Type | Method and Description |
---|---|
String[] | RepairProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String identifier) |
org.apache.flink.types.Row[] | RefreshObjectTableProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId) |
String[] | ExpireSnapshotsProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, Integer retainMax, Integer retainMin, String olderThanStr, Integer maxDeletes) |
String[] | RollbackToTimestampProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, Long timestamp) |
String[] | DropPartitionProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String... partitionStrings). Deprecated. |
String[] | DeleteTagProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagNameStr) |
String[] | MarkPartitionDoneProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String partitions) |
org.apache.flink.types.Row[] | ExpireTagsProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String olderThanStr) |
String[] | FastForwardProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String branchName) |
String[] | DeleteBranchProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String branchStr) |
String[] | ResetConsumerProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String consumerId, Long nextSnapshotId) |
String[] | RollbackToProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagName, Long snapshotId) |
org.apache.flink.types.Row[] | CreateTagFromWatermarkProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagName, Long watermark, String timeRetained) |
String[] | CreateOrReplaceTagBaseProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagName, Long snapshotId, String timeRetained) |
org.apache.flink.types.Row[] | CreateTagFromTimestampProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagName, Long timestamp, String timeRetained) |
String[] | CreateBranchProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String branchName, String tagName) |
String[] | RenameTagProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String tagName, String targetTagName) |
org.apache.flink.types.Row[] | ExpirePartitionsProcedure.call(org.apache.flink.table.procedure.ProcedureContext procedureContext, String tableId, String expirationTime, String timestampFormatter, String timestampPattern, String expireStrategy, Integer maxExpires) |
void | RepairProcedure.repairDatabasesOrTables(String databaseOrTables) |
protected Table | ProcedureBase.table(String tableId) |
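These procedure classes are usually not invoked directly; they back Flink SQL CALL statements. The sketch below registers a Paimon catalog from Java and calls the snapshot-expiration procedure through a TableEnvironment. The warehouse path and table name are placeholders, and the sys.expire_snapshots procedure name with positional arguments mirroring the call(...) signature above is an assumption to verify against your Paimon and Flink versions.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ProcedureCallSketch {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Register a Paimon catalog; the warehouse path is a placeholder.
        tEnv.executeSql(
                "CREATE CATALOG paimon WITH ("
                        + "'type' = 'paimon', "
                        + "'warehouse' = 'file:///tmp/paimon')");
        tEnv.executeSql("USE CATALOG paimon");

        // ExpireSnapshotsProcedure: assumed to be registered as sys.expire_snapshots,
        // with arguments following call(tableId, retainMax, retainMin, olderThanStr, maxDeletes).
        tEnv.executeSql(
                "CALL sys.expire_snapshots('default.my_table', 5, 1, '2024-12-01 00:00:00', 10)");
    }
}
```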
Modifier and Type | Method and Description |
---|---|
protected void | HiveCatalog.alterTableImpl(Identifier identifier, List<SchemaChange> changes) |
void | HiveCatalog.dropPartition(Identifier identifier, Map<String,String> partitionSpec) |
Table | HiveCatalog.getDataOrFormatTable(Identifier identifier) |
protected AbstractCatalog.TableMeta | HiveCatalog.getDataTableMeta(Identifier identifier) |
TableSchema | HiveCatalog.getDataTableSchema(Identifier identifier) |
org.apache.hadoop.hive.metastore.api.Table | HiveCatalog.getHmsTable(Identifier identifier) |
void | HiveCatalog.repairTable(Identifier identifier) |
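HiveCatalog adds a few Hive-specific entry points on top of the generic catalog API. A small sketch, assuming an already constructed HiveCatalog and placeholder database and table names:

```java
import org.apache.hadoop.hive.metastore.api.Table;
import org.apache.paimon.catalog.Identifier;
import org.apache.paimon.hive.HiveCatalog;

public class HiveCatalogSketch {

    // Hypothetical helper: inspect and repair a table's Hive Metastore entry.
    static void inspect(HiveCatalog hiveCatalog) throws Exception {
        Identifier identifier = Identifier.create("my_db", "my_table"); // placeholders

        // getHmsTable: fetch the raw Hive Metastore representation of the table.
        Table hmsTable = hiveCatalog.getHmsTable(identifier);
        System.out.println(hmsTable.getSd().getLocation());

        // repairTable: re-sync the Metastore entry with the table on the file system.
        hiveCatalog.repairTable(identifier);
    }
}
```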
Modifier and Type | Method and Description |
---|---|
protected void | JdbcCatalog.alterTableImpl(Identifier identifier, List<SchemaChange> changes) |
protected TableSchema | JdbcCatalog.getDataTableSchema(Identifier identifier) |
Modifier and Type | Method and Description |
---|---|
static List<LocalOrphanFilesClean> | LocalOrphanFilesClean.createOrphanFilesCleans(Catalog catalog, String databaseName, String tableName, long olderThanMillis, SerializableConsumer<Path> fileCleaner, Integer parallelism) |
static CleanOrphanFilesResult | LocalOrphanFilesClean.executeDatabaseOrphanFiles(Catalog catalog, String databaseName, String tableName, long olderThanMillis, SerializableConsumer<Path> fileCleaner, Integer parallelism) |
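The same orphan-file cleanup is also available without a Flink job through LocalOrphanFilesClean. A minimal sketch, assuming the class lives in org.apache.paimon.operation, using placeholder database and table names, a 24-hour retention threshold, and a stubbed file-deletion callback:

```java
import java.time.Duration;

import org.apache.paimon.catalog.Catalog;
import org.apache.paimon.operation.LocalOrphanFilesClean;

public class LocalOrphanCleanSketch {

    static void clean(Catalog catalog) throws Exception {
        // Only files that are unreferenced AND older than this threshold are candidates.
        long olderThanMillis =
                System.currentTimeMillis() - Duration.ofHours(24).toMillis();

        var result =
                LocalOrphanFilesClean.executeDatabaseOrphanFiles(
                        catalog,
                        "my_db",         // placeholder database name
                        "my_table",      // placeholder table name
                        olderThanMillis,
                        path -> { /* deletion callback, e.g. delete via the table's FileIO */ },
                        2);              // parallelism
        System.out.println(result);
    }
}
```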
Modifier and Type | Method and Description |
---|---|
void | PrivilegedCatalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists) |
void | PrivilegedCatalog.dropPartition(Identifier identifier, Map<String,String> partitions) |
void | PrivilegedCatalog.dropTable(Identifier identifier, boolean ignoreIfNotExists) |
Table | PrivilegedCatalog.getTable(Identifier identifier) |
void | PrivilegedCatalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists) |
Modifier and Type | Method and Description |
---|---|
void | RESTCatalog.alterTable(Identifier identifier, List<SchemaChange> changes, boolean ignoreIfNotExists) |
void | RESTCatalog.createPartition(Identifier identifier, Map<String,String> partitionSpec) |
void | RESTCatalog.dropPartition(Identifier identifier, Map<String,String> partitions) |
void | RESTCatalog.dropTable(Identifier identifier, boolean ignoreIfNotExists) |
Table | RESTCatalog.getTable(Identifier identifier) |
List<PartitionEntry> | RESTCatalog.listPartitions(Identifier identifier) |
void | RESTCatalog.renameTable(Identifier fromTable, Identifier toTable, boolean ignoreIfNotExists) |
Modifier and Type | Method and Description |
---|---|
TableSchema | SchemaManager.commitChanges(List<SchemaChange> changes). Update the table schema from SchemaChanges. |
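commitChanges is the lower-level entry point that the catalogs above delegate to. A minimal sketch, assuming a SchemaManager constructed from a FileIO and the table's root path (the constructor arguments are an assumption), committing two illustrative changes:

```java
import java.util.Arrays;

import org.apache.paimon.fs.FileIO;
import org.apache.paimon.fs.Path;
import org.apache.paimon.schema.SchemaChange;
import org.apache.paimon.schema.SchemaManager;
import org.apache.paimon.schema.TableSchema;
import org.apache.paimon.types.DataTypes;

public class SchemaEvolutionSketch {

    static TableSchema evolve(FileIO fileIO, Path tableRoot) throws Exception {
        // SchemaManager reads and writes the schema files under the table root.
        SchemaManager schemaManager = new SchemaManager(fileIO, tableRoot);

        // Commit a batch of SchemaChanges; the committed TableSchema is returned.
        return schemaManager.commitChanges(
                Arrays.asList(
                        SchemaChange.addColumn("event_time", DataTypes.TIMESTAMP(3)),
                        SchemaChange.setOption("write-buffer-size", "256 mb")));
    }
}
```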
Modifier and Type | Method and Description |
---|---|
void | RepairProcedure.repairDatabasesOrTables(String databaseOrTables, Catalog paimonCatalog) |
Copyright © 2023–2024 The Apache Software Foundation. All rights reserved.