Modifier and Type | Method and Description |
---|---|
BinaryRow | UnawareAppendCompactionTask.partition() |

Constructor and Description |
---|
MultiTableUnawareAppendCompactionTask(BinaryRow partition, List<DataFileMeta> files, Identifier identifier) |
UnawareAppendCompactionTask(BinaryRow partition, List<DataFileMeta> files) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | Projection.apply(InternalRow row) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | KeyPartPartitionKeyExtractor.partition(InternalRow record) |
BinaryRow | KeyPartPartitionKeyExtractor.trimmedPrimaryKey(InternalRow record) |

Modifier and Type | Method and Description |
---|---|
CloseableIterator<BinaryRow> | GlobalIndexAssigner.endBoostrapWithoutEmit(boolean isEndInput) |

Modifier and Type | Method and Description |
---|---|
int | BucketAssigner.assignBucket(BinaryRow part, Filter<Integer> filter, int maxCount) |
void | BucketAssigner.bootstrapBucket(BinaryRow part, int bucket) |
void | BucketAssigner.decrement(BinaryRow part, int bucket) |
boolean | SkipNewExistingProcessor.processExists(InternalRow newRow, BinaryRow previousPart, int previousBucket) |
boolean | ExistingProcessor.processExists(InternalRow newRow, BinaryRow previousPart, int previousBucket) |
boolean | UseOldExistingProcessor.processExists(InternalRow newRow, BinaryRow previousPart, int previousBucket) |
boolean | DeleteExistingProcessor.processExists(InternalRow newRow, BinaryRow previousPart, int previousBucket) |

Modifier and Type | Field and Description |
---|---|
static BinaryRow | BinaryRow.EMPTY_ROW |

Modifier and Type | Method and Description |
---|---|
BinaryRow | BinaryRow.copy() |
BinaryRow | BinaryRow.copy(BinaryRow reuse) |
BinaryRow | PartitionInfo.getPartitionRow() |
static BinaryRow | BinaryRow.singleColumn(BinaryString string) |
static BinaryRow | BinaryRow.singleColumn(Integer i) |
static BinaryRow | BinaryRow.singleColumn(String string) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | BinaryRow.copy(BinaryRow reuse) |

Constructor and Description |
---|
BinaryRowWriter(BinaryRow row) |
BinaryRowWriter(BinaryRow row, int initialSize) |
PartitionInfo(int[] map, RowType partitionType, BinaryRow partition) |

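The BinaryRowWriter constructors above are the usual way to populate a BinaryRow field by field. A minimal sketch; the writer methods used here (writeInt, writeString, complete) follow the common pattern but should be treated as assumptions rather than a verified signature list:

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.data.BinaryRowWriter;
import org.apache.paimon.data.BinaryString;

public class BuildBinaryRow {
    public static void main(String[] args) {
        // Arity is fixed up front; the writer fills the fixed-length part and appends variable-length data.
        BinaryRow row = new BinaryRow(2);
        BinaryRowWriter writer = new BinaryRowWriter(row);
        writer.writeInt(0, 2024);                                   // field 0: INT
        writer.writeString(1, BinaryString.fromString("us-east"));  // field 1: STRING
        writer.complete();                                          // finish writing before the row is used

        // Convenience factory for single-column rows (see singleColumn above).
        BinaryRow single = BinaryRow.singleColumn("us-east");
        System.out.println(row.getInt(0) + " / " + single.getString(0));
    }
}
```
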
Modifier and Type | Method and Description |
---|---|
BinaryRow | BinaryRowSerializer.copy(BinaryRow from) |
BinaryRow | BinaryRowSerializer.createInstance() |
BinaryRow | BinaryRowSerializer.deserialize(BinaryRow reuse, DataInputView source) |
BinaryRow | BinaryRowSerializer.deserialize(DataInputView source) |
BinaryRow | BinaryRowSerializer.deserializeFromPages(AbstractPagedInputView headerLessView) |
BinaryRow | BinaryRowSerializer.deserializeFromPages(BinaryRow reuse, AbstractPagedInputView headerLessView) |
BinaryRow | BinaryRowSerializer.mapFromPages(BinaryRow reuse, AbstractPagedInputView headerLessView) |
BinaryRow | BinaryRowSerializer.toBinaryRow(BinaryRow rowData) |
BinaryRow | InternalRowSerializer.toBinaryRow(InternalRow row) - Convert an InternalRow into a BinaryRow. |
abstract BinaryRow | AbstractRowDataSerializer.toBinaryRow(T rowData) - Convert an InternalRow to a BinaryRow. |

Modifier and Type | Method and Description |
---|---|
BinaryRow | BinaryRowSerializer.copy(BinaryRow from) |
BinaryRow | BinaryRowSerializer.deserialize(BinaryRow reuse, DataInputView source) |
BinaryRow | BinaryRowSerializer.deserializeFromPages(BinaryRow reuse, AbstractPagedInputView headerLessView) |
BinaryRow | BinaryRowSerializer.mapFromPages(BinaryRow reuse, AbstractPagedInputView headerLessView) |
void | BinaryRowSerializer.pointTo(int length, BinaryRow reuse, AbstractPagedInputView headerLessView) - Point the row to memory segments with offset (in the AbstractPagedInputView) and length. |
void | BinaryRowSerializer.serialize(BinaryRow record, DataOutputView target) |
int | BinaryRowSerializer.serializeToPages(BinaryRow record, AbstractPagedOutputView headerLessView) |
static void | BinaryRowSerializer.serializeWithoutLengthSlow(BinaryRow record, MemorySegmentWritable out) |
BinaryRow | BinaryRowSerializer.toBinaryRow(BinaryRow rowData) |

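A short round trip through serialize and deserialize makes the serializer's contract concrete. This is only a sketch: the in-memory view classes DataOutputSerializer and DataInputDeserializer (assumed here to live in org.apache.paimon.io) are used as convenient DataOutputView/DataInputView implementations.

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.data.BinaryRowWriter;
import org.apache.paimon.data.serializer.BinaryRowSerializer;
import org.apache.paimon.io.DataInputDeserializer;
import org.apache.paimon.io.DataOutputSerializer;

public class SerializerRoundTrip {
    public static void main(String[] args) throws Exception {
        // Build a one-field row to serialize.
        BinaryRow row = new BinaryRow(1);
        BinaryRowWriter writer = new BinaryRowWriter(row);
        writer.writeInt(0, 7);
        writer.complete();

        // The serializer is created for a fixed number of fields.
        BinaryRowSerializer serializer = new BinaryRowSerializer(1);

        DataOutputSerializer out = new DataOutputSerializer(64);
        serializer.serialize(row, out);                 // serialize(BinaryRow, DataOutputView)

        DataInputDeserializer in = new DataInputDeserializer(out.getCopyOfBuffer());
        BinaryRow copy = serializer.deserialize(in);    // deserialize(DataInputView)
        System.out.println(copy.getInt(0));             // expected: 7
    }
}
```
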
Modifier and Type | Method and Description |
---|---|
DeletionVectorsMaintainer | DeletionVectorsMaintainer.Factory.createOrRestore(Long snapshotId, BinaryRow partition) |
DeletionVectorsMaintainer | DeletionVectorsMaintainer.Factory.createOrRestore(Long snapshotId, BinaryRow partition, int bucket) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | BucketedAppendDeletionFileMaintainer.getPartition() |
BinaryRow | UnawareAppendDeletionFileMaintainer.getPartition() |
BinaryRow | AppendDeletionFileMaintainer.getPartition() |

Modifier and Type | Method and Description |
---|---|
static BucketedAppendDeletionFileMaintainer | AppendDeletionFileMaintainer.forBucketedAppend(IndexFileHandler indexFileHandler, Long snapshotId, BinaryRow partition, int bucket) |
static UnawareAppendDeletionFileMaintainer | AppendDeletionFileMaintainer.forUnawareAppend(IndexFileHandler indexFileHandler, Long snapshotId, BinaryRow partition) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | ExternalBuffer.BufferIterator.getRow() |
BinaryRow | RowBuffer.RowBufferIterator.getRow() |
BinaryRow | ChannelReaderInputViewIterator.next() |
BinaryRow | ChannelReaderInputViewIterator.next(BinaryRow reuse) |

Modifier and Type | Method and Description |
---|---|
MutableObjectIterator<BinaryRow> | ChannelReaderInputView.createBinaryRowIterator(BinaryRowSerializer serializer) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | ChannelReaderInputViewIterator.next(BinaryRow reuse) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | ChangelogCompactTask.partition() |

Constructor and Description |
---|
ChangelogCompactTask(long checkpointId, BinaryRow partition, Map<Integer,List<DataFileMeta>> newFileChangelogFiles, Map<Integer,List<DataFileMeta>> compactChangelogFiles) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | FixedBucketFromPkExtractor.logPrimaryKey() |
BinaryRow | FixedBucketFromPkExtractor.partition() |
BinaryRow | DynamicPartitionLoader.partition() |
BinaryRow | FixedBucketFromPkExtractor.trimmedPrimaryKey() |

Modifier and Type | Method and Description |
---|---|
InternalRow | RemoteTableQuery.lookup(BinaryRow partition, int bucket, InternalRow key) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | RowDataKeyAndBucketExtractor.logPrimaryKey() |
BinaryRow | RowDataKeyAndBucketExtractor.partition() |
BinaryRow | StoreSinkWriteState.StateValue.partition() |
BinaryRow | RowDataKeyAndBucketExtractor.trimmedPrimaryKey() |

Modifier and Type | Method and Description |
---|---|
int | RowDataChannelComputer.channel(BinaryRow partition, int bucket) |
void | StoreSinkWrite.compact(BinaryRow partition, int bucket, boolean fullCompaction) |
void | GlobalFullCompactionSinkWrite.compact(BinaryRow partition, int bucket, boolean fullCompaction) |
void | StoreSinkWriteImpl.compact(BinaryRow partition, int bucket, boolean fullCompaction) |
boolean | StoreSinkWriteState.StateValueFilter.filter(String tableName, BinaryRow partition, int bucket) |
void | StoreSinkWrite.notifyNewFiles(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> files) |
void | StoreSinkWriteImpl.notifyNewFiles(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> files) |
DataFileMeta | RewriteFileIndexSink.FileIndexProcessor.process(BinaryRow partition, int bucket, DataFileMeta dataFileMeta) |

Constructor and Description |
---|
StateValue(BinaryRow partition, int bucket, byte[] value) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | CdcRecordKeyAndBucketExtractor.logPrimaryKey() |
BinaryRow | CdcRecordKeyAndBucketExtractor.partition() |
BinaryRow | CdcRecordPartitionKeyExtractor.partition(CdcRecord record) |
BinaryRow | CdcRecordKeyAndBucketExtractor.trimmedPrimaryKey() |
BinaryRow | CdcRecordPartitionKeyExtractor.trimmedPrimaryKey(CdcRecord record) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | PlaceholderSplit.partition() |

Modifier and Type | Method and Description |
---|---|
void | HiveMetastoreClient.addPartition(BinaryRow partition) |

Modifier and Type | Method and Description |
---|---|
void | HiveMetastoreClient.addPartitions(List<BinaryRow> partitions) |

Constructor and Description |
---|
MigrateTask(FileIO fileIO, String format, String location, FileStoreTable paimonTable, BinaryRow partitionRow, Path newDir, Map<Path,Path> rollback) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | IcebergDataFileMeta.partition() |

Modifier and Type | Method and Description |
---|---|
static IcebergDataFileMeta | IcebergDataFileMeta.create(IcebergDataFileMeta.Content content, String filePath, String fileFormat, BinaryRow partition, long recordCount, long fileSizeInBytes, IcebergSchema icebergSchema, SimpleStats stats) |

Modifier and Type | Method and Description |
---|---|
Map<Pair<BinaryRow,Integer>,List<IndexFileMeta>> | IndexFileHandler.scan(long snapshot, String indexType, Set<BinaryRow> partitions) |
Map<Pair<BinaryRow,Integer>,List<IndexFileMeta>> | IndexFileHandler.scan(Snapshot snapshot, String indexType, Set<BinaryRow> partitions) |

Modifier and Type | Method and Description |
---|---|
int | HashBucketAssigner.assign(BinaryRow partition, int hash) - Assign a bucket for the key hash of a record. |
int | SimpleHashBucketAssigner.assign(BinaryRow partition, int hash) |
int | BucketAssigner.assign(BinaryRow partition, int hash) |
IndexMaintainer<KeyValue> | HashIndexMaintainer.Factory.createOrRestore(Long snapshotId, BinaryRow partition, int bucket) |
IndexMaintainer<T> | IndexMaintainer.Factory.createOrRestore(Long snapshotId, BinaryRow partition, int bucket) |
static PartitionIndex | PartitionIndex.loadIndex(IndexFileHandler indexFileHandler, BinaryRow partition, long targetBucketRowNumber, java.util.function.IntPredicate loadFilter, java.util.function.IntPredicate bucketFilter) |
List<IndexFileMeta> | IndexFileHandler.scan(long snapshotId, String indexType, BinaryRow partition, int bucket) |
Map<String,DeletionFile> | IndexFileHandler.scanDVIndex(Long snapshotId, BinaryRow partition, int bucket) |
List<IndexManifestEntry> | IndexFileHandler.scanEntries(long snapshotId, String indexType, BinaryRow partition) |
List<IndexManifestEntry> | IndexFileHandler.scanEntries(String indexType, BinaryRow partition) |
Optional<IndexFileMeta> | IndexFileHandler.scanHashIndex(long snapshotId, BinaryRow partition, int bucket) |

Modifier and Type | Method and Description |
---|---|
Map<Pair<BinaryRow,Integer>,List<IndexFileMeta>> | IndexFileHandler.scan(long snapshot, String indexType, Set<BinaryRow> partitions) |
Map<Pair<BinaryRow,Integer>,List<IndexFileMeta>> | IndexFileHandler.scan(Snapshot snapshot, String indexType, Set<BinaryRow> partitions) |
List<IndexManifestEntry> | IndexFileHandler.scanEntries(long snapshot, String indexType, Set<BinaryRow> partitions) |
List<IndexManifestEntry> | IndexFileHandler.scanEntries(Snapshot snapshot, String indexType, Set<BinaryRow> partitions) |

Modifier and Type | Field and Description |
---|---|
static BinaryRow | DataFileMeta.EMPTY_MAX_KEY |
static BinaryRow | DataFileMeta.EMPTY_MIN_KEY |

Modifier and Type | Method and Description |
---|---|
BinaryRow | DataFileMeta.maxKey() |
BinaryRow | DataFileMeta.minKey() |

Modifier and Type | Method and Description |
---|---|
KeyValueFileWriterFactory | KeyValueFileWriterFactory.Builder.build(BinaryRow partition, int bucket, CoreOptions options) |
KeyValueFileReaderFactory | KeyValueFileReaderFactory.Builder.build(BinaryRow partition, int bucket, DeletionVector.Factory dvFactory) |
KeyValueFileReaderFactory | KeyValueFileReaderFactory.Builder.build(BinaryRow partition, int bucket, DeletionVector.Factory dvFactory, boolean projectKeys, List<Predicate> filters) |

Constructor and Description |
---|
DataFileMeta(String fileName, long fileSize, long rowCount, BinaryRow minKey, BinaryRow maxKey, SimpleStats keyStats, SimpleStats valueStats, long minSequenceNumber, long maxSequenceNumber, long schemaId, int level, List<String> extraFiles, Long deleteRowCount, byte[] embeddedIndex, FileSource fileSource, List<String> valueStatsCols) |
DataFileMeta(String fileName, long fileSize, long rowCount, BinaryRow minKey, BinaryRow maxKey, SimpleStats keyStats, SimpleStats valueStats, long minSequenceNumber, long maxSequenceNumber, long schemaId, int level, List<String> extraFiles, Timestamp creationTime, Long deleteRowCount, byte[] embeddedIndex, FileSource fileSource, List<String> valueStatsCols) |
DataFileMeta(String fileName, long fileSize, long rowCount, BinaryRow minKey, BinaryRow maxKey, SimpleStats keyStats, SimpleStats valueStats, long minSequenceNumber, long maxSequenceNumber, long schemaId, int level, Long deleteRowCount, byte[] embeddedIndex, FileSource fileSource, List<String> valueStatsCols) |

Modifier and Type | Field and Description |
---|---|
BinaryRow | FileEntry.Identifier.partition |

Modifier and Type | Method and Description |
---|---|
BinaryRow | SimpleFileEntry.maxKey() |
BinaryRow | FileEntry.maxKey() |
BinaryRow | ManifestEntry.maxKey() |
BinaryRow | SimpleFileEntry.minKey() |
BinaryRow | FileEntry.minKey() |
BinaryRow | ManifestEntry.minKey() |
BinaryRow | SimpleFileEntry.partition() |
BinaryRow | FileEntry.partition() |
BinaryRow | IndexManifestEntry.partition() |
BinaryRow | PartitionEntry.partition() |
BinaryRow | BucketEntry.partition() |
BinaryRow | ManifestEntry.partition() |

Modifier and Type | Method and Description |
---|---|
static java.util.function.Function<InternalRow,BinaryRow> | ManifestEntrySerializer.partitionGetter() |

Modifier and Type | Method and Description |
---|---|
static PartitionEntry | PartitionEntry.fromDataFile(BinaryRow partition, FileKind kind, DataFileMeta file) |
boolean | ManifestCacheFilter.test(BinaryRow partition, int bucket) |

Modifier and Type | Method and Description |
---|---|
static void | BucketEntry.merge(Collection<BucketEntry> from, Map<Pair<BinaryRow,Integer>,BucketEntry> to) |
static void | PartitionEntry.merge(Collection<PartitionEntry> from, Map<BinaryRow,PartitionEntry> to) |

Constructor and Description |
---|
BucketEntry(BinaryRow partition, int bucket, long recordCount, long fileSizeInBytes, long fileCount, long lastFileCreationTime) |
Identifier(BinaryRow partition, int bucket, int level, String fileName, List<String> extraFiles, byte[] embeddedIndex) |
IndexManifestEntry(FileKind kind, BinaryRow partition, int bucket, IndexFileMeta indexFile) |
ManifestEntry(FileKind kind, BinaryRow partition, int bucket, int totalBuckets, DataFileMeta file) |
PartitionEntry(BinaryRow partition, long recordCount, long fileSizeInBytes, long fileCount, long lastFileCreationTime) |
SimpleFileEntry(FileKind kind, BinaryRow partition, int bucket, int level, String fileName, List<String> extraFiles, byte[] embeddedIndex, BinaryRow minKey, BinaryRow maxKey) |

Modifier and Type | Method and Description |
---|---|
static String | LookupFile.localFilePrefix(RowType partitionType, BinaryRow partition, int bucket, String remoteFileName) |

Modifier and Type | Method and Description |
---|---|
void | MetastoreClient.addPartition(BinaryRow partition) |

Modifier and Type | Method and Description |
---|---|
default void | MetastoreClient.addPartitions(List<BinaryRow> partitions) |

Modifier and Type | Method and Description |
---|---|
static BinaryRow | FileMetaUtils.writePartitionValue(RowType partitionRowType, Map<String,String> partitionValues, List<BinaryWriter.ValueSetter> valueSetters, String partitionDefaultName) |

Modifier and Type | Method and Description |
---|---|
static CommitMessage | FileMetaUtils.commitFile(BinaryRow partition, List<DataFileMeta> dataFileMetas) |

Modifier and Type | Field and Description |
---|---|
protected BinaryRow | FileStoreWrite.State.partition |

Modifier and Type | Field and Description |
---|---|
protected Map<BinaryRow,Set<Integer>> | FileDeletionBase.deletionBuckets |
protected Map<BinaryRow,Map<Integer,AbstractFileStoreWrite.WriterContainer<T>>> | AbstractFileStoreWrite.writers |

Modifier and Type | Method and Description |
---|---|
Map<BinaryRow,List<Integer>> | AbstractFileStoreWrite.getActiveBuckets() |
static Map<BinaryRow,Map<Integer,List<DataFileMeta>>> | FileStoreScan.Plan.groupByPartFiles(List<ManifestEntry> files) - Return a map grouped by partition and bucket. |
default List<BinaryRow> | FileStoreScan.listPartitions() |

Modifier and Type | Method and Description |
---|---|
void | FileStoreWrite.compact(BinaryRow partition, int bucket, boolean fullCompaction) - Compact data stored in the given partition and bucket. |
void | AbstractFileStoreWrite.compact(BinaryRow partition, int bucket, boolean fullCompaction) |
List<DataFileMeta> | AppendOnlyFileStoreWrite.compactRewrite(BinaryRow partition, int bucket, java.util.function.Function<String,DeletionVector> dvFactory, List<DataFileMeta> toCompact) |
RecordReader<KeyValue> | MergeFileSplitRead.createMergeReader(BinaryRow partition, int bucket, List<DataFileMeta> files, List<DeletionFile> deletionFiles, boolean keepDelete) |
RecordReader<KeyValue> | MergeFileSplitRead.createNoMergeReader(BinaryRow partition, int bucket, List<DataFileMeta> files, List<DeletionFile> deletionFiles, boolean onlyFilterKey) |
RecordReader<InternalRow> | RawFileSplitRead.createReader(BinaryRow partition, int bucket, List<DataFileMeta> files, List<IOExceptionSupplier<DeletionVector>> dvFactories) |
protected abstract RecordWriter<T> | AbstractFileStoreWrite.createWriter(Long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> restoreFiles, long restoredMaxSeqNumber, CommitIncrement restoreIncrement, ExecutorService compactExecutor, DeletionVectorsMaintainer deletionVectorsMaintainer) |
protected MergeTreeWriter | KeyValueFileStoreWrite.createWriter(Long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> restoreFiles, long restoredMaxSeqNumber, CommitIncrement restoreIncrement, ExecutorService compactExecutor, DeletionVectorsMaintainer dvMaintainer) |
protected RecordWriter<InternalRow> | AppendOnlyFileStoreWrite.createWriter(Long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> restoredFiles, long restoredMaxSeqNumber, CommitIncrement restoreIncrement, ExecutorService compactExecutor, DeletionVectorsMaintainer dvMaintainer) |
AbstractFileStoreWrite.WriterContainer<T> | AbstractFileStoreWrite.createWriterContainer(BinaryRow partition, int bucket, boolean ignorePreviousFiles) |
protected CompactManager | AppendOnlyUnawareBucketFileStoreWrite.getCompactManager(BinaryRow partition, int bucket, List<DataFileMeta> restoredFiles, ExecutorService compactExecutor, DeletionVectorsMaintainer dvMaintainer) |
protected CompactManager | AppendOnlyFixedBucketFileStoreWrite.getCompactManager(BinaryRow partition, int bucket, List<DataFileMeta> restoredFiles, ExecutorService compactExecutor, DeletionVectorsMaintainer dvMaintainer) |
protected abstract CompactManager | AppendOnlyFileStoreWrite.getCompactManager(BinaryRow partition, int bucket, List<DataFileMeta> restoredFiles, ExecutorService compactExecutor, DeletionVectorsMaintainer dvMaintainer) |
protected AbstractFileStoreWrite.WriterContainer<T> | AbstractFileStoreWrite.getWriterWrapper(BinaryRow partition, int bucket) |
void | FileStoreWrite.notifyNewFiles(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> files) - Notify that some new files were created at the given snapshot in the given bucket. |
void | AbstractFileStoreWrite.notifyNewFiles(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> files) |
FileStoreScan | FileStoreScan.withPartitionBucket(BinaryRow partition, int bucket) |
FileStoreScan | AbstractFileStoreScan.withPartitionBucket(BinaryRow partition, int bucket) |
void | FileStoreWrite.write(BinaryRow partition, int bucket, T data) - Write the data to the store according to the partition and bucket. |
void | AbstractFileStoreWrite.write(BinaryRow partition, int bucket, T data) |
void | BundleFileStoreWriter.writeBundle(BinaryRow partition, int bucket, BundleRecords bundle) - Write the batch data to the store according to the partition and bucket. |
void | AppendOnlyFileStoreWrite.writeBundle(BinaryRow partition, int bucket, BundleRecords bundle) |

Modifier and Type | Method and Description |
---|---|
protected void | FileDeletionBase.addMergedDataFiles(Map<BinaryRow,Map<Integer,Set<String>>> dataFiles, Snapshot snapshot) - NOTE: This method is used for building the data file skipping set. |
protected boolean | FileDeletionBase.containsDataFile(Map<BinaryRow,Map<Integer,Set<String>>> dataFiles, ManifestEntry testee) |
boolean | PartitionExpire.isValueAllExpired(Collection<BinaryRow> partitions) |
FileStoreScan | FileStoreScan.withPartitionFilter(List<BinaryRow> partitions) |
FileStoreScan | AbstractFileStoreScan.withPartitionFilter(List<BinaryRow> partitions) |
ManifestsReader | ManifestsReader.withPartitionFilter(List<BinaryRow> partitions) |

Constructor and Description |
---|
State(BinaryRow partition, int bucket, long baseSnapshotId, long lastModifiedCommitIdentifier, Collection<DataFileMeta> dataFiles, long maxSequenceNumber, IndexMaintainer<T> indexMaintainer, DeletionVectorsMaintainer deletionVectorsMaintainer, CommitIncrement commitIncrement) |

Modifier and Type | Method and Description |
---|---|
protected static Map<BinaryRow,Set<Integer>> | CommitStats.changedPartBuckets(List<ManifestEntry>... changes) |
protected static List<BinaryRow> | CommitStats.changedPartitions(List<ManifestEntry>... changes) |

Modifier and Type | Method and Description |
---|---|
CompactionMetrics.Reporter | CompactionMetrics.createReporter(BinaryRow partition, int bucket) |

Modifier and Type | Method and Description |
---|---|
Object[] | PartitionExpireStrategy.convertPartition(BinaryRow partition) |
static PartitionInfo | PartitionUtils.create(Pair<int[],RowType> pair, BinaryRow binaryRow) |
static Predicate | PartitionPredicate.createPartitionPredicate(RowType partitionType, BinaryRow partition) |
boolean | PartitionValuesTimeExpireStrategy.isExpired(java.time.LocalDateTime expireDateTime, BinaryRow partition) |
boolean | PartitionPredicate.test(BinaryRow part) |
boolean | PartitionPredicate.DefaultPartitionPredicate.test(BinaryRow part) |
boolean | PartitionPredicate.MultiplePartitionPredicate.test(BinaryRow part) |

Modifier and Type | Method and Description |
---|---|
static PartitionPredicate | PartitionPredicate.fromMultiple(RowType partitionType, List<BinaryRow> partitions) |
static PartitionPredicate | PartitionPredicate.fromMultiple(RowType partitionType, Set<BinaryRow> partitions) |

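fromMultiple plus test is the typical way to restrict a scan to a fixed set of partitions. A minimal sketch, assuming a single STRING partition column; RowType.of and DataTypes are used here as the usual type-building helpers and should be treated as assumptions:

```java
import java.util.Arrays;

import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.partition.PartitionPredicate;
import org.apache.paimon.types.DataTypes;
import org.apache.paimon.types.RowType;

public class PartitionFilterSketch {
    public static void main(String[] args) {
        RowType partitionType = RowType.of(DataTypes.STRING());
        BinaryRow p1 = BinaryRow.singleColumn("2024-01-01");
        BinaryRow p2 = BinaryRow.singleColumn("2024-01-02");

        // Accept only the listed partitions.
        PartitionPredicate predicate =
                PartitionPredicate.fromMultiple(partitionType, Arrays.asList(p1, p2));

        System.out.println(predicate.test(p1));                                    // expected: true
        System.out.println(predicate.test(BinaryRow.singleColumn("2023-12-31")));  // expected: false
    }
}
```
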
Modifier and Type | Method and Description |
---|---|
InetSocketAddress | QueryLocationImpl.getLocation(BinaryRow partition, int bucket, boolean forceUpdate) |
InetSocketAddress | QueryLocation.getLocation(BinaryRow partition, int bucket, boolean forceUpdate) - Get the location for the given partition and bucket. |

Modifier and Type | Method and Description |
---|---|
CompletableFuture<BinaryRow[]> | KvQueryClient.getValues(BinaryRow partition, int bucket, BinaryRow[] keys) |

Modifier and Type | Method and Description |
---|---|
BinaryRow[] | KvRequest.keys() |
BinaryRow | KvRequest.partition() |
BinaryRow[] | KvResponse.values() |

Constructor and Description |
---|
KvRequest(BinaryRow partition, int bucket, BinaryRow[] keys) |
KvResponse(BinaryRow[] values) |
Modifier and Type | Field and Description |
---|---|
protected BinaryRow | BinaryIndexedSortable.row1 |

Modifier and Type | Method and Description |
---|---|
protected MutableObjectIterator<BinaryRow> | BinaryExternalMerger.channelReaderInputViewIterator(ChannelReaderInputView inView) |
protected Comparator<BinaryRow> | BinaryExternalMerger.mergeComparator() |
protected List<BinaryRow> | BinaryExternalMerger.mergeReusedEntries(int size) |
MutableObjectIterator<BinaryRow> | SortBuffer.sortedIterator() |
MutableObjectIterator<BinaryRow> | BinaryExternalSortBuffer.sortedIterator() |
MutableObjectIterator<BinaryRow> | BinaryInMemorySortBuffer.sortedIterator() |

Modifier and Type | Method and Description |
---|---|
void | BinaryExternalSortBuffer.write(MutableObjectIterator<BinaryRow> iterator) |
protected void | BinaryExternalMerger.writeMergingOutput(MutableObjectIterator<BinaryRow> mergeIterator, AbstractPagedOutputView output) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | SimpleStats.maxValues() |
BinaryRow | SimpleStats.minValues() |

Constructor and Description |
---|
SimpleStats(BinaryRow minValues, BinaryRow maxValues, BinaryArray nullCounts) |

Modifier and Type | Method and Description |
---|---|
InternalRow | LocalTableQuery.lookup(BinaryRow partition, int bucket, InternalRow key) - TODO: remove synchronized and support multi-threaded lookup. |
InternalRow | TableQuery.lookup(BinaryRow partition, int bucket, InternalRow key) |
void | LocalTableQuery.refreshFiles(BinaryRow partition, int bucket, List<DataFileMeta> beforeFiles, List<DataFileMeta> dataFiles) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | TableWrite.getPartition(InternalRow row) - Calculate which partition the row belongs to. |
BinaryRow | TableWriteImpl.getPartition(InternalRow row) |
BinaryRow | KeyAndBucketExtractor.logPrimaryKey() |
BinaryRow | RowKeyExtractor.logPrimaryKey() |
BinaryRow | SinkRecord.partition() |
BinaryRow | KeyAndBucketExtractor.partition() |
BinaryRow | CommitMessageImpl.partition() |
BinaryRow | RowKeyExtractor.partition() |
BinaryRow | CommitMessage.partition() - Partition of this commit message. |
BinaryRow | RowPartitionKeyExtractor.partition(InternalRow record) |
BinaryRow | PartitionKeyExtractor.partition(T record) |
BinaryRow | SinkRecord.primaryKey() |
BinaryRow | KeyAndBucketExtractor.trimmedPrimaryKey() |
BinaryRow | RowKeyExtractor.trimmedPrimaryKey() |
BinaryRow | RowPartitionKeyExtractor.trimmedPrimaryKey(InternalRow record) |
BinaryRow | PartitionKeyExtractor.trimmedPrimaryKey(T record) |

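getPartition pairs naturally with compact: first resolve the partition a row maps to, then compact that (partition, bucket) pair. A hedged sketch; obtaining the TableWrite instance (from a table's write builder) is outside this snippet:

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.table.sink.TableWrite;

public class CompactRowPartition {

    /** Trigger a full compaction of the bucket holding the given row's partition. */
    static void compactFor(TableWrite write, InternalRow row, int bucket) throws Exception {
        BinaryRow partition = write.getPartition(row); // which partition does this row belong to?
        write.compact(partition, bucket, true);        // true = full compaction
    }
}
```
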
Modifier and Type | Method and Description |
---|---|
static int | KeyAndBucketExtractor.bucketKeyHashCode(BinaryRow bucketKey) |
void | TableWrite.compact(BinaryRow partition, int bucket, boolean fullCompaction) - Compact a bucket of a partition. |
void | TableWriteImpl.compact(BinaryRow partition, int bucket, boolean fullCompaction) |
void | TableWriteImpl.notifyNewFiles(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> files) - Notify that some new files were created at the given snapshot in the given bucket. |
static int | ChannelComputer.select(BinaryRow partition, int bucket, int numChannels) |
void | TableWrite.writeBundle(BinaryRow partition, int bucket, BundleRecords bundle) - Write a bundle of records directly, not row by row. |
void | TableWriteImpl.writeBundle(BinaryRow partition, int bucket, BundleRecords bundle) |

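The two static helpers above can be exercised without any table state; a small sketch (the concrete partition and key values are arbitrary):

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.table.sink.ChannelComputer;
import org.apache.paimon.table.sink.KeyAndBucketExtractor;

public class ChannelSelection {
    public static void main(String[] args) {
        BinaryRow partition = BinaryRow.singleColumn("2024-01-01"); // sample partition value
        int bucket = 3;

        // Stable hash of a bucket key.
        int hash = KeyAndBucketExtractor.bucketKeyHashCode(BinaryRow.singleColumn(42));

        // Map (partition, bucket) onto one of 8 sink channels.
        int channel = ChannelComputer.select(partition, bucket, 8);
        System.out.println("hash=" + hash + ", channel=" + channel);
    }
}
```
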
Constructor and Description |
---|
CommitMessageImpl(BinaryRow partition, int bucket, DataIncrement dataIncrement, CompactIncrement compactIncrement) |
CommitMessageImpl(BinaryRow partition, int bucket, DataIncrement dataIncrement, CompactIncrement compactIncrement, IndexIncrement indexIncrement) |
SinkRecord(BinaryRow partition, int bucket, BinaryRow primaryKey, InternalRow row) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | DataSplit.partition() |

Modifier and Type | Method and Description |
---|---|
default List<BinaryRow> | TableScan.listPartitions() - List partitions. |

Modifier and Type | Method and Description |
---|---|
DataSplit.Builder | DataSplit.Builder.withPartition(BinaryRow partition) |

Modifier and Type | Method and Description |
---|---|
default InnerTableScan | InnerTableScan.withPartitionFilter(List<BinaryRow> partitions) |
AbstractDataTableScan | AbstractDataTableScan.withPartitionFilter(List<BinaryRow> partitions) |

Modifier and Type | Method and Description |
---|---|
List<BinaryRow> | SnapshotReader.partitions() - List partitions. |
List<BinaryRow> | SnapshotReaderImpl.partitions() |

Modifier and Type | Method and Description |
---|---|
SnapshotReader | SnapshotReader.withPartitionFilter(List<BinaryRow> partitions) |
SnapshotReader | SnapshotReaderImpl.withPartitionFilter(List<BinaryRow> partitions) |

Modifier and Type | Method and Description |
---|---|
BinaryRow | FileMonitorTable.FileChange.partition() |

Constructor and Description |
---|
FileChange(long snapshotId, BinaryRow partition, int bucket, List<DataFileMeta> beforeFiles, List<DataFileMeta> dataFiles) |

Modifier and Type | Field and Description |
---|---|
static BinaryRow | BinaryRowDataUtil.EMPTY_ROW |

Modifier and Type | Method and Description |
---|---|
static BinaryRow | SerializationUtils.deserializeBinaryRow(byte[] bytes) - Schemaless deserialization of a BinaryRow. |
static BinaryRow | SerializationUtils.deserializeBinaryRow(DataInputView input) - Schemaless deserialization of a BinaryRow from a DataInputView. |

Modifier and Type | Method and Description |
---|---|
InternalRow | ProjectToRowFunction.apply(InternalRow input, BinaryRow project) |
Path | FileStorePathFactory.bucketPath(BinaryRow partition, int bucket) |
DataFilePathFactory | FileStorePathFactory.createDataFilePathFactory(BinaryRow partition, int bucket) |
static ColumnVector | VectorMappingUtils.createFixedVector(DataType dataType, BinaryRow partition, int index) |
List<Path> | FileStorePathFactory.getHierarchicalPartitionPath(BinaryRow partition) |
String | FileStorePathFactory.getPartitionString(BinaryRow partition) - IMPORTANT: This method is NOT THREAD SAFE. |
static String | InternalRowPartitionComputer.partToSimpleString(RowType partitionType, BinaryRow partition, String delimiter, int maxLength) |
Path | FileStorePathFactory.relativePartitionAndBucketPath(BinaryRow partition, int bucket) |
static byte[] | SerializationUtils.serializeBinaryRow(BinaryRow row) - Serialize a BinaryRow; unlike BinaryRowSerializer, the arity is also serialized here, so deserialization is schemaless. |
static void | SerializationUtils.serializeBinaryRow(BinaryRow row, DataOutputView out) - Serialize a BinaryRow to a DataOutputView. |

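Because serializeBinaryRow also writes the arity, the resulting byte array can be read back without any serializer or schema; a minimal round-trip sketch:

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.utils.SerializationUtils;

public class SchemalessRoundTrip {
    public static void main(String[] args) {
        BinaryRow row = BinaryRow.singleColumn("us-east");

        // Arity travels with the bytes, so deserialization needs no schema.
        byte[] bytes = SerializationUtils.serializeBinaryRow(row);
        BinaryRow copy = SerializationUtils.deserializeBinaryRow(bytes);

        System.out.println(copy.getString(0)); // expected: us-east
    }
}
```
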