Modifier and Type | Class and Description |
---|---|
class |
PartitionSettedRow
An implementation of
InternalRow which provides a row with a fixed partition value. |
Modifier and Type | Field and Description |
---|---|
protected InternalRow |
PartitionSettedRow.row |
Modifier and Type | Method and Description |
---|---|
InternalRow |
PartitionSettedRow.getRow(int pos,
int numFields) |
InternalRow |
KeyValue.key() |
InternalRow |
KeyValueSerializer.toRow(InternalRow key,
long sequenceNumber,
RowKind valueKind,
InternalRow value) |
InternalRow |
KeyValueSerializer.toRow(KeyValue record) |
InternalRow |
KeyValue.value() |
Modifier and Type | Method and Description |
---|---|
Comparator<InternalRow> |
AppendOnlyFileStore.newKeyComparator() |
Comparator<InternalRow> |
KeyValueFileStore.newKeyComparator() |
Modifier and Type | Method and Description |
---|---|
KeyValue |
KeyValueSerializer.fromRow(InternalRow row) |
KeyValue |
KeyValue.replace(InternalRow key,
long sequenceNumber,
RowKind valueKind,
InternalRow value) |
KeyValue |
KeyValue.replace(InternalRow key,
RowKind valueKind,
InternalRow value) |
KeyValue |
KeyValue.replaceKey(InternalRow key) |
PartitionSettedRow |
PartitionSettedRow.replaceRow(InternalRow row)
Replaces the underlying
InternalRow backing this PartitionSettedRow . |
KeyValue |
KeyValue.replaceValue(InternalRow value) |
static String |
KeyValue.rowDataToString(InternalRow row,
RowType type) |
InternalRow |
KeyValueSerializer.toRow(InternalRow key,
long sequenceNumber,
RowKind valueKind,
InternalRow value) |
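The `KeyValue` and `KeyValueSerializer` entries above pair a key row with a value row, a sequence number and a `RowKind`, and flatten that structure into a single `InternalRow`. A minimal round-trip sketch, assuming the serializer is constructed from the key and value `RowType`s and that `KeyValue` has a no-arg constructor (both assumptions; only the methods listed above are taken from this page):

```java
import org.apache.paimon.KeyValue;
import org.apache.paimon.KeyValueSerializer;
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.types.DataTypes;
import org.apache.paimon.types.RowKind;
import org.apache.paimon.types.RowType;

public class KeyValueRoundTrip {
    public static void main(String[] args) {
        RowType keyType = RowType.of(DataTypes.INT());
        RowType valueType = RowType.of(DataTypes.INT(), DataTypes.BIGINT());

        // Assumed constructor: key RowType first, value RowType second.
        KeyValueSerializer serializer = new KeyValueSerializer(keyType, valueType);

        // KeyValue.replace(...) sets key, sequence number, kind and value (see the table above).
        KeyValue kv = new KeyValue().replace(
                GenericRow.of(1), 0L, RowKind.INSERT, GenericRow.of(1, 100L));

        // toRow flattens the KeyValue into one InternalRow; fromRow reads it back.
        InternalRow flattened = serializer.toRow(kv);
        KeyValue back = serializer.fromRow(flattened);
        System.out.println(back.key().getInt(0) + " -> " + back.value().getLong(1));
    }
}
```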
Modifier and Type | Method and Description |
---|---|
void |
AppendOnlyWriter.write(InternalRow rowData) |
Constructor and Description |
---|
AppendOnlyWriter(FileIO fileIO,
IOManager ioManager,
long schemaId,
FileFormat fileFormat,
long targetFileSize,
RowType writeSchema,
long maxSequenceNumber,
CompactManager compactManager,
IOFunction<List<DataFileMeta>,RecordReaderIterator<InternalRow>> bucketFileRead,
boolean forceCompact,
DataFilePathFactory pathFactory,
CommitIncrement increment,
boolean useWriteBuffer,
boolean spillable,
String fileCompression,
CompressOptions spillCompression,
SimpleColStatsCollector.Factory[] statsCollectors,
MemorySize maxDiskSize,
FileIndexOptions fileIndexOptions,
boolean asyncFileWrite,
boolean statsDenseStore) |
Modifier and Type | Method and Description |
---|---|
Iterator<InternalRow> |
ArrowBundleRecords.iterator() |
Modifier and Type | Field and Description |
---|---|
protected RecordReader.RecordIterator<InternalRow> |
ArrowBatchConverter.iterator |
Modifier and Type | Method and Description |
---|---|
void |
ArrowPerRowBatchConverter.reset(RecordReader.RecordIterator<InternalRow> iterator) |
Modifier and Type | Method and Description |
---|---|
Iterable<InternalRow> |
ArrowBatchReader.readBatch(org.apache.arrow.vector.VectorSchemaRoot vsr) |
Modifier and Type | Method and Description |
---|---|
boolean |
ArrowFormatCWriter.write(InternalRow currentRow) |
boolean |
ArrowFormatWriter.write(InternalRow currentRow) |
Modifier and Type | Class and Description |
---|---|
class |
CastedRow
An implementation of
InternalRow which provides a casted view of the underlying InternalRow . |
class |
DefaultValueRow
An implementation of
InternalRow which provides a default value for the underlying InternalRow . |
Modifier and Type | Method and Description |
---|---|
InternalRow |
DefaultValueRow.defaultValueRow() |
InternalRow |
CastedRow.getRow(int pos,
int numFields) |
InternalRow |
DefaultValueRow.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
static DefaultValueRow |
DefaultValueRow.from(InternalRow defaultValueRow) |
<V> V |
CastFieldGetter.getFieldOrNull(InternalRow row) |
CastedRow |
CastedRow.replaceRow(InternalRow row)
Replaces the underlying
InternalRow backing this CastedRow . |
DefaultValueRow |
DefaultValueRow.replaceRow(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
BinaryRow |
Projection.apply(InternalRow row) |
int |
RecordComparator.compare(InternalRow o1,
InternalRow o2) |
boolean |
RecordEqualiser.equals(InternalRow row1,
InternalRow row2)
Returns
true if the rows are equal to each other and false otherwise. |
void |
NormalizedKeyComputer.putKey(InternalRow record,
MemorySegment target,
int offset)
Writes a normalized key for the given record into the target
MemorySegment . |
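Several entries above (`RecordComparator.compare`, `RecordEqualiser.equals`, and the `Comparator<InternalRow>` returned by `newKeyComparator`) compare or equate rows field by field; in Paimon these are normally code-generated. A minimal hand-written sketch of an equivalent key comparator, assuming an `INT` key at position 0 and a `STRING` key at position 1:

```java
import java.util.Comparator;

import org.apache.paimon.data.BinaryString;
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.InternalRow;

public class RowComparatorSketch {
    // Order by the int field at position 0, then by the string field at position 1.
    static final Comparator<InternalRow> KEY_COMPARATOR =
            Comparator.comparingInt((InternalRow r) -> r.getInt(0))
                    .thenComparing((InternalRow r) -> r.getString(1).toString());

    public static void main(String[] args) {
        InternalRow a = GenericRow.of(1, BinaryString.fromString("a"));
        InternalRow b = GenericRow.of(1, BinaryString.fromString("b"));
        System.out.println(KEY_COMPARATOR.compare(a, b) < 0); // prints true
    }
}
```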
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
IndexBootstrap.bootstrap(int numAssigners,
int assignId) |
Modifier and Type | Method and Description |
---|---|
void |
GlobalIndexAssigner.bootstrapKey(InternalRow value) |
BinaryRow |
KeyPartPartitionKeyExtractor.partition(InternalRow record) |
boolean |
SkipNewExistingProcessor.processExists(InternalRow newRow,
BinaryRow previousPart,
int previousBucket) |
boolean |
ExistingProcessor.processExists(InternalRow newRow,
BinaryRow previousPart,
int previousBucket) |
boolean |
UseOldExistingProcessor.processExists(InternalRow newRow,
BinaryRow previousPart,
int previousBucket) |
boolean |
DeleteExistingProcessor.processExists(InternalRow newRow,
BinaryRow previousPart,
int previousBucket) |
void |
GlobalIndexAssigner.processInput(InternalRow value) |
BinaryRow |
KeyPartPartitionKeyExtractor.trimmedPrimaryKey(InternalRow record) |
Constructor and Description |
---|
DeleteExistingProcessor(ProjectToRowFunction setPartition,
BucketAssigner bucketAssigner,
java.util.function.BiConsumer<InternalRow,Integer> collector) |
SkipNewExistingProcessor(java.util.function.BiConsumer<InternalRow,Integer> collector) |
UseOldExistingProcessor(ProjectToRowFunction setPartition,
java.util.function.BiConsumer<InternalRow,Integer> collector) |
Modifier and Type | Class and Description |
---|---|
class |
BinaryRow
An implementation of
InternalRow which is backed by MemorySegment instead of
Object. |
class |
GenericRow
An internal data structure representing data of
RowType . |
class |
JoinedRow
An implementation of
InternalRow which is backed by two concatenated InternalRow . |
class |
LazyGenericRow
An
InternalRow which lazily initializes its fields. |
class |
NestedRow
Its memory storage structure is exactly the same as
BinaryRow . |
Modifier and Type | Method and Description |
---|---|
InternalRow |
GenericArray.getRow(int pos,
int numFields) |
InternalRow |
DataGetters.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
InternalRow |
BinaryArray.getRow(int pos,
int numFields) |
InternalRow |
LazyGenericRow.getRow(int pos,
int numFields) |
InternalRow |
NestedRow.getRow(int pos,
int numFields) |
InternalRow |
JoinedRow.getRow(int pos,
int numFields) |
InternalRow |
GenericRow.getRow(int pos,
int numFields) |
InternalRow |
BinaryRow.getRow(int pos,
int numFields) |
InternalRow |
JoinedRow.row1() |
InternalRow |
JoinedRow.row2() |
Modifier and Type | Method and Description |
---|---|
NestedRow |
NestedRow.copy(InternalRow reuse) |
Object |
InternalRow.FieldGetter.getFieldOrNull(InternalRow row) |
static JoinedRow |
JoinedRow.join(InternalRow row1,
InternalRow row2) |
JoinedRow |
JoinedRow.replace(InternalRow row1,
InternalRow row2)
Replaces the
InternalRow backing this JoinedRow . |
void |
BinaryWriter.writeRow(int pos,
InternalRow value,
InternalRowSerializer serializer) |
Constructor and Description |
---|
JoinedRow(InternalRow row1,
InternalRow row2)
Creates a new
JoinedRow of kind RowKind.INSERT backed by row1 and row2. |
JoinedRow(RowKind rowKind,
InternalRow row1,
InternalRow row2)
Creates a new
JoinedRow of the given RowKind backed by row1 and row2. |
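`JoinedRow` above concatenates two backing rows into one logical row without copying: the fields of `row1` come first, then the fields of `row2`, and `replace(...)` swaps in new backing rows so one wrapper can be reused. A minimal sketch using `GenericRow` as the backing rows:

```java
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.data.JoinedRow;

public class JoinedRowSketch {
    public static void main(String[] args) {
        InternalRow key = GenericRow.of(42);          // one field
        InternalRow value = GenericRow.of(7, 3.14d);  // two fields

        JoinedRow joined = JoinedRow.join(key, value);
        System.out.println(joined.getInt(0));    // 42, field 0 of row1
        System.out.println(joined.getDouble(2)); // 3.14, field 1 of row2

        // Reuse the wrapper with a different first row.
        joined.replace(GenericRow.of(43), value);
        System.out.println(joined.getInt(0));    // 43
    }
}
```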
Modifier and Type | Class and Description |
---|---|
class |
ColumnarRow
Columnar row to support access to vector column data.
|
Modifier and Type | Method and Description |
---|---|
InternalRow |
RowColumnVector.getRow(int i) |
InternalRow |
ColumnarRow.getRow(int pos,
int numFields) |
InternalRow |
VectorizedColumnBatch.getRow(int rowId,
int colId) |
InternalRow |
ColumnarArray.getRow(int pos,
int numFields) |
InternalRow |
ColumnarRowIterator.next() |
Modifier and Type | Class and Description |
---|---|
class |
SafeBinaryRow
A
BinaryRow which reads safely and avoids core dumps. |
Modifier and Type | Method and Description |
---|---|
InternalRow |
SafeBinaryArray.getRow(int pos,
int numFields) |
InternalRow |
SafeBinaryRow.getRow(int pos,
int numFields) |
Modifier and Type | Class and Description |
---|---|
class |
AbstractRowDataSerializer<T extends InternalRow>
Row serializer that provides a paged serialization method.
|
Modifier and Type | Method and Description |
---|---|
InternalRow |
RowCompactedSerializer.copy(InternalRow from) |
InternalRow |
InternalRowSerializer.copy(InternalRow from) |
InternalRow |
InternalRowSerializer.copyRowData(InternalRow from,
InternalRow reuse) |
InternalRow |
RowCompactedSerializer.deserialize(byte[] bytes) |
InternalRow |
RowCompactedSerializer.deserialize(DataInputView source) |
InternalRow |
InternalRowSerializer.deserialize(DataInputView source) |
InternalRow |
InternalRowSerializer.deserializeFromPages(AbstractPagedInputView source) |
InternalRow |
InternalRowSerializer.deserializeFromPages(InternalRow reuse,
AbstractPagedInputView source) |
InternalRow |
InternalRowSerializer.mapFromPages(InternalRow reuse,
AbstractPagedInputView source) |
Modifier and Type | Method and Description |
---|---|
Serializer<InternalRow> |
RowCompactedSerializer.duplicate() |
Modifier and Type | Method and Description |
---|---|
InternalRow |
RowCompactedSerializer.copy(InternalRow from) |
InternalRow |
InternalRowSerializer.copy(InternalRow from) |
InternalRow |
InternalRowSerializer.copyRowData(InternalRow from,
InternalRow reuse) |
InternalRow |
InternalRowSerializer.deserializeFromPages(InternalRow reuse,
AbstractPagedInputView source) |
InternalRow |
InternalRowSerializer.mapFromPages(InternalRow reuse,
AbstractPagedInputView source) |
void |
RowCompactedSerializer.serialize(InternalRow record,
DataOutputView target) |
void |
InternalRowSerializer.serialize(InternalRow row,
DataOutputView target) |
byte[] |
RowCompactedSerializer.serializeToBytes(InternalRow record) |
int |
InternalRowSerializer.serializeToPages(InternalRow row,
AbstractPagedOutputView target) |
BinaryRow |
InternalRowSerializer.toBinaryRow(InternalRow row)
Convert
InternalRow into BinaryRow . |
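`InternalRowSerializer.toBinaryRow` above converts an arbitrary `InternalRow` (for example an object-backed `GenericRow`) into the `MemorySegment`-backed `BinaryRow`. A minimal sketch, assuming the serializer can be constructed directly from a `RowType` (the constructor form is an assumption):

```java
import org.apache.paimon.data.BinaryRow;
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.serializer.InternalRowSerializer;
import org.apache.paimon.types.DataTypes;
import org.apache.paimon.types.RowType;

public class ToBinaryRowSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(DataTypes.INT(), DataTypes.BIGINT());
        // Assumed constructor: InternalRowSerializer(RowType).
        InternalRowSerializer serializer = new InternalRowSerializer(rowType);

        // Re-encodes the object-backed row into binary form.
        BinaryRow binary = serializer.toBinaryRow(GenericRow.of(1, 100L));
        System.out.println(binary.getInt(0) + ", " + binary.getLong(1));
    }
}
```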
Modifier and Type | Method and Description |
---|---|
InternalRow |
ApplyDeletionFileRecordIterator.next() |
Modifier and Type | Method and Description |
---|---|
FileRecordIterator<InternalRow> |
ApplyDeletionFileRecordIterator.iterator() |
RecordReader.RecordIterator<InternalRow> |
ApplyDeletionVectorReader.readBatch() |
RecordReader<InternalRow> |
ApplyDeletionVectorReader.reader() |
Constructor and Description |
---|
ApplyDeletionFileRecordIterator(FileRecordIterator<InternalRow> iterator,
DeletionVector deletionVector) |
ApplyDeletionVectorReader(RecordReader<InternalRow> reader,
DeletionVector deletionVector) |
Modifier and Type | Method and Description |
---|---|
boolean |
ExternalBuffer.put(InternalRow row) |
boolean |
InMemoryBuffer.put(InternalRow row) |
boolean |
RowBuffer.put(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
static RowBuffer |
RowBuffer.getBuffer(IOManager ioManager,
MemorySegmentPool memoryPool,
AbstractRowDataSerializer<InternalRow> serializer,
boolean spillable,
MemorySize maxDiskSize,
CompressOptions compression) |
Modifier and Type | Class and Description |
---|---|
class |
FlinkRowWrapper
Convert from Flink row data.
|
Modifier and Type | Method and Description |
---|---|
InternalRow |
FlinkRowWrapper.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
FlinkRowData |
FlinkRowData.replace(InternalRow row) |
Constructor and Description |
---|
FlinkRowData(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
CompactedChangelogFormatReaderFactory.createReader(FormatReaderFactory.Context context) |
Modifier and Type | Field and Description |
---|---|
protected RocksDBValueState<InternalRow,InternalRow> |
PrimaryKeyLookupTable.tableState |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
IncrementalCompactDiffSplitRead.createReader(DataSplit split) |
List<InternalRow> |
FullCacheLookupTable.get(InternalRow key) |
List<InternalRow> |
PrimaryKeyPartialLookupTable.get(InternalRow key) |
List<InternalRow> |
LookupTable.get(InternalRow key) |
List<InternalRow> |
NoPrimaryKeyLookupTable.innerGet(InternalRow key) |
List<InternalRow> |
PrimaryKeyLookupTable.innerGet(InternalRow key) |
abstract List<InternalRow> |
FullCacheLookupTable.innerGet(InternalRow key) |
List<InternalRow> |
SecondaryIndexLookupTable.innerGet(InternalRow key) |
RecordReader<InternalRow> |
LookupStreamingReader.nextBatch(boolean useParallelism) |
RecordReader<InternalRow> |
LookupCompactDiffRead.reader(Split split) |
Modifier and Type | Method and Description |
---|---|
List<InternalRow> |
FullCacheLookupTable.get(InternalRow key) |
List<InternalRow> |
PrimaryKeyPartialLookupTable.get(InternalRow key) |
List<InternalRow> |
LookupTable.get(InternalRow key) |
List<InternalRow> |
NoPrimaryKeyLookupTable.innerGet(InternalRow key) |
List<InternalRow> |
PrimaryKeyLookupTable.innerGet(InternalRow key) |
abstract List<InternalRow> |
FullCacheLookupTable.innerGet(InternalRow key) |
List<InternalRow> |
SecondaryIndexLookupTable.innerGet(InternalRow key) |
protected void |
NoPrimaryKeyLookupTable.refreshRow(InternalRow row,
Predicate predicate) |
protected void |
PrimaryKeyLookupTable.refreshRow(InternalRow row,
Predicate predicate) |
protected abstract void |
FullCacheLookupTable.refreshRow(InternalRow row,
Predicate predicate) |
protected void |
SecondaryIndexLookupTable.refreshRow(InternalRow row,
Predicate predicate) |
void |
FixedBucketFromPkExtractor.setRecord(InternalRow record) |
byte[] |
NoPrimaryKeyLookupTable.toKeyBytes(InternalRow row) |
byte[] |
PrimaryKeyLookupTable.toKeyBytes(InternalRow row) |
abstract byte[] |
FullCacheLookupTable.toKeyBytes(InternalRow row) |
byte[] |
NoPrimaryKeyLookupTable.toValueBytes(InternalRow row) |
byte[] |
PrimaryKeyLookupTable.toValueBytes(InternalRow row) |
abstract byte[] |
FullCacheLookupTable.toValueBytes(InternalRow row) |
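The lookup-table methods above all key the cached table by an `InternalRow`, so a caller builds a key row with the same layout as the lookup key. A minimal usage sketch, assuming `table` is an already-built `LookupTable` with a single `INT` key (construction is outside this page, and the package path is assumed from the Flink lookup module):

```java
import java.util.List;

import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.flink.lookup.LookupTable;   // assumed package path

public class LookupSketch {
    // Returns all cached rows matching the given integer key.
    static List<InternalRow> lookupById(LookupTable table, int id) throws Exception {
        // The key row layout must match the table's lookup key fields.
        return table.get(GenericRow.of(id));
    }
}
```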
Modifier and Type | Method and Description |
---|---|
void |
NoPrimaryKeyLookupTable.refresh(Iterator<InternalRow> incremental) |
void |
FullCacheLookupTable.refresh(Iterator<InternalRow> input) |
protected void |
FileStoreLookupFunction.setCacheRowFilter(Filter<InternalRow> cacheRowFilter) |
void |
FullCacheLookupTable.specifyCacheRowFilter(Filter<InternalRow> filter) |
void |
PrimaryKeyPartialLookupTable.specifyCacheRowFilter(Filter<InternalRow> filter) |
void |
LookupTable.specifyCacheRowFilter(Filter<InternalRow> filter) |
Constructor and Description |
---|
LookupStreamingReader(LookupFileStoreTable table,
int[] projection,
Predicate predicate,
Set<Integer> requireCachedBucketIds,
Filter<InternalRow> cacheRowFilter) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
RemoteTableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
RemoteTableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key) |
Modifier and Type | Method and Description |
---|---|
static org.apache.flink.streaming.api.datastream.DataStream<InternalRow> |
QueryFileMonitor.build(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env,
Table table) |
static ChannelComputer<InternalRow> |
QueryFileMonitor.createChannelComputer() |
Modifier and Type | Method and Description |
---|---|
void |
QueryAddressRegister.invoke(InternalRow row,
org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) |
Modifier and Type | Method and Description |
---|---|
void |
QueryExecutorOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<InternalRow> streamRecord) |
void |
QueryFileMonitor.run(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<InternalRow> ctx) |
Modifier and Type | Method and Description |
---|---|
protected ChannelComputer<InternalRow> |
RowDynamicBucketSink.assignerChannelComputer(Integer numAssigners) |
protected ChannelComputer<org.apache.flink.api.java.tuple.Tuple2<InternalRow,Integer>> |
RowDynamicBucketSink.channelComputer2() |
protected org.apache.flink.streaming.api.operators.OneInputStreamOperator<InternalRow,Committable> |
RowUnawareBucketSink.createWriteOperator(StoreSinkWrite.Provider writeProvider,
String commitUser) |
protected org.apache.flink.streaming.api.operators.OneInputStreamOperator<InternalRow,Committable> |
FixedBucketSink.createWriteOperator(StoreSinkWrite.Provider writeProvider,
String commitUser) |
protected org.apache.flink.streaming.api.operators.OneInputStreamOperator<org.apache.flink.api.java.tuple.Tuple2<InternalRow,Integer>,Committable> |
RowDynamicBucketSink.createWriteOperator(StoreSinkWrite.Provider writeProvider,
String commitUser) |
protected SerializableFunction<TableSchema,PartitionKeyExtractor<InternalRow>> |
RowDynamicBucketSink.extractorFunction() |
protected org.apache.flink.streaming.api.datastream.DataStream<InternalRow> |
FlinkSinkBuilder.mapToInternalRow(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.table.data.RowData> input,
RowType rowType) |
Modifier and Type | Method and Description |
---|---|
int |
RowDataChannelComputer.channel(InternalRow record) |
int |
RowAssignerChannelComputer.channel(InternalRow record) |
SinkRecord |
StoreSinkWrite.write(InternalRow rowData) |
SinkRecord |
GlobalFullCompactionSinkWrite.write(InternalRow rowData) |
SinkRecord |
StoreSinkWriteImpl.write(InternalRow rowData) |
SinkRecord |
StoreSinkWrite.write(InternalRow rowData,
int bucket) |
SinkRecord |
GlobalFullCompactionSinkWrite.write(InternalRow rowData,
int bucket) |
SinkRecord |
StoreSinkWriteImpl.write(InternalRow rowData,
int bucket) |
Modifier and Type | Method and Description |
---|---|
org.apache.flink.streaming.api.datastream.DataStreamSink<?> |
DynamicBucketCompactSink.build(org.apache.flink.streaming.api.datastream.DataStream<InternalRow> input,
Integer parallelism) |
protected org.apache.flink.streaming.api.datastream.DataStreamSink<?> |
FlinkSinkBuilder.buildDynamicBucketSink(org.apache.flink.streaming.api.datastream.DataStream<InternalRow> input,
boolean globalIndex) |
protected org.apache.flink.streaming.api.datastream.DataStreamSink<?> |
FlinkSinkBuilder.buildForFixedBucket(org.apache.flink.streaming.api.datastream.DataStream<InternalRow> input) |
int |
RowWithBucketChannelComputer.channel(org.apache.flink.api.java.tuple.Tuple2<InternalRow,Integer> record) |
void |
LocalMergeOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<InternalRow> record) |
void |
RowDataStoreWriteOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<InternalRow> element) |
void |
DynamicBucketRowWriteOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<org.apache.flink.api.java.tuple.Tuple2<InternalRow,Integer>> element) |
Modifier and Type | Method and Description |
---|---|
protected org.apache.flink.streaming.api.operators.OneInputStreamOperator<org.apache.flink.api.java.tuple.Tuple2<InternalRow,Integer>,Committable> |
GlobalDynamicBucketSink.createWriteOperator(StoreSinkWrite.Provider writeProvider,
String commitUser) |
Modifier and Type | Method and Description |
---|---|
org.apache.flink.streaming.api.datastream.DataStreamSink<?> |
GlobalDynamicBucketSink.build(org.apache.flink.streaming.api.datastream.DataStream<InternalRow> input,
Integer parallelism) |
int |
KeyPartRowChannelComputer.channel(org.apache.flink.api.java.tuple.Tuple2<KeyPartOrRow,InternalRow> record) |
void |
GlobalIndexAssignerOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<org.apache.flink.api.java.tuple.Tuple2<KeyPartOrRow,InternalRow>> streamRecord) |
Constructor and Description |
---|
IndexBootstrapOperator(IndexBootstrap bootstrap,
SerializableFunction<InternalRow,T> converter) |
Modifier and Type | Method and Description |
---|---|
void |
SortOperator.processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord<InternalRow> element) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
InternalRowTypeSerializer.copy(InternalRow from) |
InternalRow |
InternalRowTypeSerializer.copy(InternalRow from,
InternalRow reuse) |
InternalRow |
InternalRowTypeSerializer.createInstance() |
InternalRow |
InternalRowTypeSerializer.deserialize(org.apache.flink.core.memory.DataInputView source) |
InternalRow |
InternalRowTypeSerializer.deserialize(InternalRow reuse,
org.apache.flink.core.memory.DataInputView source) |
Modifier and Type | Method and Description |
---|---|
org.apache.flink.api.common.typeutils.TypeSerializer<InternalRow> |
InternalRowTypeSerializer.duplicate() |
static InternalTypeInfo<InternalRow> |
InternalTypeInfo.fromRowType(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
InternalRowTypeSerializer.copy(InternalRow from) |
InternalRow |
InternalRowTypeSerializer.copy(InternalRow from,
InternalRow reuse) |
InternalRow |
InternalRowTypeSerializer.deserialize(InternalRow reuse,
org.apache.flink.core.memory.DataInputView source) |
void |
InternalRowTypeSerializer.serialize(InternalRow record,
org.apache.flink.core.memory.DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
FormatReaderFactory.createReader(FormatReaderFactory.Context context) |
Modifier and Type | Method and Description |
---|---|
void |
FormatWriter.addElement(InternalRow element)
Adds an element to the encoder.
|
void |
SimpleStatsCollector.collect(InternalRow row)
Updates the statistics with a new row.
|
Modifier and Type | Method and Description |
---|---|
InternalRow |
FieldReaderFactory.RowReader.read(org.apache.avro.io.Decoder decoder,
Object reuse) |
InternalRow |
AvroRowDatumReader.read(InternalRow reuse,
org.apache.avro.io.Decoder in) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
AvroBulkFormat.createReader(FormatReaderFactory.Context context) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
AvroRowDatumReader.read(InternalRow reuse,
org.apache.avro.io.Decoder in) |
void |
AvroRowDatumWriter.write(InternalRow datum,
org.apache.avro.io.Encoder out) |
void |
FieldWriterFactory.RowWriter.writeRow(InternalRow row,
org.apache.avro.io.Encoder encoder) |
Constructor and Description |
---|
OrcWriterFactory(Vectorizer<InternalRow> vectorizer,
Properties writerProperties,
org.apache.hadoop.conf.Configuration configuration,
int writeBatchSize)
Creates a new OrcWriterFactory using the provided Vectorizer, Hadoop Configuration, and ORC
writer properties.
|
Modifier and Type | Method and Description |
---|---|
void |
OrcBulkWriter.addElement(InternalRow element) |
void |
RowDataVectorizer.vectorize(InternalRow row,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
Constructor and Description |
---|
OrcBulkWriter(Vectorizer<InternalRow> vectorizer,
org.apache.orc.Writer writer,
PositionOutputStream underlyingStream,
int batchSize) |
Constructor and Description |
---|
ParquetWriterFactory(ParquetBuilder<InternalRow> writerBuilder)
Creates a new ParquetWriterFactory using the given builder to assemble the ParquetWriter.
|
Modifier and Type | Method and Description |
---|---|
ParquetWriter<InternalRow> |
RowDataParquetBuilder.createWriter(org.apache.parquet.io.OutputFile out,
String compression) |
protected org.apache.parquet.hadoop.api.WriteSupport<InternalRow> |
ParquetRowDataBuilder.getWriteSupport(org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
void |
ParquetBulkWriter.addElement(InternalRow datum) |
void |
ParquetRowDataWriter.write(InternalRow record)
Writes a record to Parquet.
|
Constructor and Description |
---|
ParquetBulkWriter(ParquetWriter<InternalRow> parquetWriter)
Creates a new ParquetBulkWriter wrapping the given ParquetWriter.
|
Modifier and Type | Method and Description |
---|---|
InternalRow |
RowDataContainer.get() |
Modifier and Type | Method and Description |
---|---|
void |
RowDataContainer.set(InternalRow rowData) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
HivePaimonArray.getRow(int i,
int i1) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
IcebergDataFileMetaSerializer.toRow(IcebergDataFileMeta file) |
InternalRow |
IcebergManifestEntrySerializer.toRow(IcebergManifestEntry entry) |
InternalRow |
IcebergManifestFileMetaSerializer.toRow(IcebergManifestFileMeta file) |
InternalRow |
IcebergPartitionSummarySerializer.toRow(IcebergPartitionSummary record) |
Modifier and Type | Method and Description |
---|---|
IcebergManifestEntry |
IcebergManifestEntrySerializer.fromRow(InternalRow row) |
IcebergDataFileMeta |
IcebergDataFileMetaSerializer.fromRow(InternalRow row) |
IcebergPartitionSummary |
IcebergPartitionSummarySerializer.fromRow(InternalRow row) |
IcebergManifestFileMeta |
IcebergManifestFileMetaSerializer.fromRow(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
IndexFileMetaSerializer.toRow(IndexFileMeta record) |
Modifier and Type | Method and Description |
---|---|
IndexFileMeta |
IndexFileMetaSerializer.fromRow(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
DataFileMetaSerializer.toRow(DataFileMeta meta) |
InternalRow |
IdentifierSerializer.toRow(Identifier record) |
protected InternalRow |
SingleFileWriter.writeImpl(T record) |
Modifier and Type | Method and Description |
---|---|
static RecordReader<InternalRow> |
SplitsParallelReadUtil.parallelExecute(RowType projectedType,
FunctionWithIOException<Split,RecordReader<InternalRow>> readBuilder,
List<Split> splits,
int pageSize,
int parallelism) |
static <EXTRA> RecordReader<InternalRow> |
SplitsParallelReadUtil.parallelExecute(RowType projectedType,
FunctionWithIOException<Split,RecordReader<InternalRow>> readBuilder,
List<Split> splits,
int pageSize,
int parallelism,
java.util.function.Function<Split,EXTRA> extraFunction,
java.util.function.BiFunction<InternalRow,EXTRA,InternalRow> addExtraToRow) |
RecordReader.RecordIterator<InternalRow> |
FileRecordReader.readBatch() |
Modifier and Type | Method and Description |
---|---|
Identifier |
IdentifierSerializer.fromRow(InternalRow rowData) |
DataFileMeta |
DataFileMetaSerializer.fromRow(InternalRow row) |
void |
DataFileIndexWriter.write(InternalRow row) |
void |
RowDataFileWriter.write(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
static RecordReader<InternalRow> |
SplitsParallelReadUtil.parallelExecute(RowType projectedType,
FunctionWithIOException<Split,RecordReader<InternalRow>> readBuilder,
List<Split> splits,
int pageSize,
int parallelism) |
static <EXTRA> RecordReader<InternalRow> |
SplitsParallelReadUtil.parallelExecute(RowType projectedType,
FunctionWithIOException<Split,RecordReader<InternalRow>> readBuilder,
List<Split> splits,
int pageSize,
int parallelism,
java.util.function.Function<Split,EXTRA> extraFunction,
java.util.function.BiFunction<InternalRow,EXTRA,InternalRow> addExtraToRow) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
IndexManifestEntrySerializer.convertTo(IndexManifestEntry record) |
InternalRow |
ManifestEntrySerializer.convertTo(ManifestEntry entry) |
InternalRow |
ManifestFileMetaSerializer.convertTo(ManifestFileMeta meta) |
InternalRow |
SimpleFileEntrySerializer.convertTo(SimpleFileEntry meta) |
Modifier and Type | Method and Description |
---|---|
static java.util.function.Function<InternalRow,Integer> |
ManifestEntrySerializer.bucketGetter() |
static Filter<InternalRow> |
FileEntry.deletedFilter() |
static java.util.function.Function<InternalRow,String> |
ManifestEntrySerializer.fileNameGetter() |
static java.util.function.Function<InternalRow,FileKind> |
ManifestEntrySerializer.kindGetter() |
static java.util.function.Function<InternalRow,Integer> |
ManifestEntrySerializer.levelGetter() |
static java.util.function.Function<InternalRow,BinaryRow> |
ManifestEntrySerializer.partitionGetter() |
static java.util.function.Function<InternalRow,Integer> |
ManifestEntrySerializer.totalBucketGetter() |
Modifier and Type | Method and Description |
---|---|
IndexManifestEntry |
IndexManifestEntrySerializer.convertFrom(int version,
InternalRow row) |
ManifestEntry |
ManifestEntrySerializer.convertFrom(int version,
InternalRow row) |
SimpleFileEntry |
SimpleFileEntrySerializer.convertFrom(int version,
InternalRow row) |
ManifestFileMeta |
ManifestFileMetaSerializer.convertFrom(int version,
InternalRow row) |
Modifier and Type | Method and Description |
---|---|
static InternalRow |
MemorySegmentUtils.readRowData(MemorySegment[] segments,
int numFields,
int baseOffset,
long offsetAndSize)
Gets an instance of
InternalRow from the underlying MemorySegment . |
Modifier and Type | Method and Description |
---|---|
static <T> T |
LookupUtils.lookup(Comparator<InternalRow> keyComparator,
InternalRow target,
SortedRun level,
BiFunctionWithIOE<InternalRow,DataFileMeta,T> lookup) |
T |
LookupLevels.lookup(InternalRow key,
int startLevel) |
static <T> T |
LookupUtils.lookup(Levels levels,
InternalRow key,
int startLevel,
BiFunctionWithIOE<InternalRow,SortedRun,T> lookup,
BiFunctionWithIOE<InternalRow,TreeSet<DataFileMeta>,T> level0Lookup) |
static <T> T |
LookupUtils.lookupLevel0(Comparator<InternalRow> keyComparator,
InternalRow target,
TreeSet<DataFileMeta> level0,
BiFunctionWithIOE<InternalRow,DataFileMeta,T> lookup) |
boolean |
SortBufferWriteBuffer.put(long sequenceNumber,
RowKind valueKind,
InternalRow key,
InternalRow value) |
boolean |
WriteBuffer.put(long sequenceNumber,
RowKind valueKind,
InternalRow key,
InternalRow value)
Put a record with sequence number and value kind.
|
T |
LookupLevels.ValueProcessor.readFromDisk(InternalRow key,
int level,
byte[] valueBytes,
String fileName) |
KeyValue |
LookupLevels.KeyValueProcessor.readFromDisk(InternalRow key,
int level,
byte[] bytes,
String fileName) |
Boolean |
LookupLevels.ContainsValueProcessor.readFromDisk(InternalRow key,
int level,
byte[] bytes,
String fileName) |
LookupLevels.PositionedKeyValue |
LookupLevels.PositionedKeyValueProcessor.readFromDisk(InternalRow key,
int level,
byte[] bytes,
String fileName) |
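`WriteBuffer.put` above takes a sequence number, a `RowKind`, a key row and a value row, and returns `false` once the in-memory buffer cannot accept more records. A minimal sketch of the usual put-then-flush pattern, assuming `buffer` already exists and `flush` is a caller-supplied routine (the `WriteBuffer` package path is also an assumption):

```java
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.mergetree.WriteBuffer;   // assumed package path
import org.apache.paimon.types.RowKind;

public class WriteBufferSketch {
    // Buffers one key/value pair, flushing and retrying if the buffer reports it is full.
    static void putOrFlush(WriteBuffer buffer, long seq, InternalRow key, InternalRow value,
                           Runnable flush) throws Exception {
        if (!buffer.put(seq, RowKind.INSERT, key, value)) {
            flush.run();                                  // hypothetical flush routine
            buffer.put(seq, RowKind.INSERT, key, value);  // retry after memory is freed
        }
    }
}
```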
Modifier and Type | Method and Description |
---|---|
void |
SortBufferWriteBuffer.forEach(Comparator<InternalRow> keyComparator,
MergeFunction<KeyValue> mergeFunction,
WriteBuffer.KvConsumer rawConsumer,
WriteBuffer.KvConsumer mergedConsumer) |
void |
WriteBuffer.forEach(Comparator<InternalRow> keyComparator,
MergeFunction<KeyValue> mergeFunction,
WriteBuffer.KvConsumer rawConsumer,
WriteBuffer.KvConsumer mergedConsumer)
Performs the given action for each remaining element in this buffer until all elements have
been processed or the action throws an exception.
|
static SortedRun |
SortedRun.fromUnsorted(List<DataFileMeta> unsortedFiles,
Comparator<InternalRow> keyComparator) |
static <T> T |
LookupUtils.lookup(Comparator<InternalRow> keyComparator,
InternalRow target,
SortedRun level,
BiFunctionWithIOE<InternalRow,DataFileMeta,T> lookup) |
static <T> T |
LookupUtils.lookup(Levels levels,
InternalRow key,
int startLevel,
BiFunctionWithIOE<InternalRow,SortedRun,T> lookup,
BiFunctionWithIOE<InternalRow,TreeSet<DataFileMeta>,T> level0Lookup) |
static <T> T |
LookupUtils.lookupLevel0(Comparator<InternalRow> keyComparator,
InternalRow target,
TreeSet<DataFileMeta> level0,
BiFunctionWithIOE<InternalRow,DataFileMeta,T> lookup) |
<T> RecordReader<T> |
MergeSorter.mergeSort(List<SizedReaderSupplier<KeyValue>> lazyReaders,
Comparator<InternalRow> keyComparator,
FieldsComparator userDefinedSeqComparator,
MergeFunctionWrapper<T> mergeFunction) |
<T> RecordReader<T> |
MergeSorter.mergeSortNoSpill(List<? extends ReaderSupplier<KeyValue>> lazyReaders,
Comparator<InternalRow> keyComparator,
FieldsComparator userDefinedSeqComparator,
MergeFunctionWrapper<T> mergeFunction) |
static <T> RecordReader<T> |
MergeTreeReaders.readerForMergeTree(List<List<SortedRun>> sections,
FileReaderFactory<KeyValue> readerFactory,
Comparator<InternalRow> userKeyComparator,
FieldsComparator userDefinedSeqComparator,
MergeFunctionWrapper<T> mergeFunctionWrapper,
MergeSorter mergeSorter) |
static <T> RecordReader<T> |
MergeTreeReaders.readerForSection(List<SortedRun> section,
FileReaderFactory<KeyValue> readerFactory,
Comparator<InternalRow> userKeyComparator,
FieldsComparator userDefinedSeqComparator,
MergeFunctionWrapper<T> mergeFunctionWrapper,
MergeSorter mergeSorter) |
void |
SortedRun.validate(Comparator<InternalRow> comparator) |
Constructor and Description |
---|
Levels(Comparator<InternalRow> keyComparator,
List<DataFileMeta> inputFiles,
int numLevels) |
LookupLevels(Levels levels,
Comparator<InternalRow> keyComparator,
RowType keyType,
LookupLevels.ValueProcessor<T> valueProcessor,
IOFunction<DataFileMeta,RecordReader<KeyValue>> fileReaderFactory,
java.util.function.Function<String,File> localFileFactory,
LookupStoreFactory lookupStoreFactory,
java.util.function.Function<Long,BloomFilter.Builder> bfGenerator,
org.apache.paimon.shade.caffeine2.com.github.benmanes.caffeine.cache.Cache<String,LookupFile> lookupFileCache) |
MergeTreeWriter(boolean writeBufferSpillable,
MemorySize maxDiskSize,
int sortMaxFan,
CompressOptions sortCompression,
IOManager ioManager,
CompactManager compactManager,
long maxSequenceNumber,
Comparator<InternalRow> keyComparator,
MergeFunction<KeyValue> mergeFunction,
KeyValueFileWriterFactory writerFactory,
boolean commitForceCompact,
CoreOptions.ChangelogProducer changelogProducer,
CommitIncrement increment,
FieldsComparator userDefinedSeqComparator) |
Modifier and Type | Field and Description |
---|---|
protected Comparator<InternalRow> |
MergeTreeCompactRewriter.keyComparator |
Modifier and Type | Method and Description |
---|---|
static <T> SortMergeReader<T> |
SortMergeReader.createSortMergeReader(List<RecordReader<KeyValue>> readers,
Comparator<InternalRow> userKeyComparator,
FieldsComparator userDefinedSeqComparator,
MergeFunctionWrapper<T> mergeFunctionWrapper,
CoreOptions.SortEngine sortEngine) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
DefaultValueAssigner.assignFieldsDefaultValue(RecordReader<InternalRow> reader)
Assigns default values for columns whose value is null.
|
Constructor and Description |
---|
FileStoreCommitImpl(FileIO fileIO,
SchemaManager schemaManager,
String tableName,
String commitUser,
RowType partitionType,
String partitionDefaultName,
FileStorePathFactory pathFactory,
SnapshotManager snapshotManager,
ManifestFile.Factory manifestFileFactory,
ManifestList.Factory manifestListFactory,
IndexManifestFile.Factory indexManifestFileFactory,
FileStoreScan scan,
int numBucket,
MemorySize manifestTargetSize,
MemorySize manifestFullCompactionSize,
int manifestMergeMinCount,
boolean dynamicPartitionOverwrite,
Comparator<InternalRow> keyComparator,
String branchName,
StatsFileHandler statsFileHandler,
BucketMode bucketMode,
Integer manifestReadParallelism,
List<CommitCallback> commitCallbacks,
int commitMaxRetries) |
KeyValueFileStoreWrite(FileIO fileIO,
SchemaManager schemaManager,
TableSchema schema,
String commitUser,
RowType partitionType,
RowType keyType,
RowType valueType,
java.util.function.Supplier<Comparator<InternalRow>> keyComparatorSupplier,
java.util.function.Supplier<FieldsComparator> udsComparatorSupplier,
java.util.function.Supplier<RecordEqualiser> logDedupEqualSupplier,
MergeFunctionFactory<KeyValue> mfFactory,
FileStorePathFactory pathFactory,
Map<String,FileStorePathFactory> format2PathFactory,
SnapshotManager snapshotManager,
FileStoreScan scan,
IndexMaintainer.Factory<KeyValue> indexFactory,
DeletionVectorsMaintainer.Factory deletionVectorsMaintainerFactory,
CoreOptions options,
KeyValueFieldsExtractor extractor,
String tableName) |
MergeFileSplitRead(CoreOptions options,
TableSchema schema,
RowType keyType,
RowType valueType,
Comparator<InternalRow> keyComparator,
MergeFunctionFactory<KeyValue> mfFactory,
KeyValueFileReaderFactory.Builder readerFactoryBuilder) |
Modifier and Type | Method and Description |
---|---|
boolean |
PartitionPredicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
boolean |
PartitionPredicate.DefaultPartitionPredicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
boolean |
PartitionPredicate.MultiplePartitionPredicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
Modifier and Type | Method and Description |
---|---|
boolean |
CompoundPredicate.test(InternalRow row) |
boolean |
Predicate.test(InternalRow row)
Test based on the specific input row.
|
boolean |
LeafPredicate.test(InternalRow row) |
abstract boolean |
CompoundPredicate.Function.test(InternalRow row,
List<Predicate> children) |
boolean |
And.test(InternalRow row,
List<Predicate> children) |
boolean |
Or.test(InternalRow row,
List<Predicate> children) |
boolean |
CompoundPredicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
boolean |
Predicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts)
Test based on the statistical information to determine whether a hit is possible.
|
boolean |
LeafPredicate.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
abstract boolean |
CompoundPredicate.Function.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts,
List<Predicate> children) |
boolean |
And.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts,
List<Predicate> children) |
boolean |
Or.test(long rowCount,
InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts,
List<Predicate> children) |
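`Predicate.test` above has two forms: one evaluates a concrete `InternalRow`, the other uses per-file statistics (`minValues`, `maxValues`, `nullCounts`) to decide whether a file can possibly contain a hit. A minimal sketch of building and testing a predicate with `PredicateBuilder` (the builder methods used here come from the same `org.apache.paimon.predicate` package but are not listed on this page):

```java
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.predicate.Predicate;
import org.apache.paimon.predicate.PredicateBuilder;
import org.apache.paimon.types.DataTypes;
import org.apache.paimon.types.RowType;

public class PredicateSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(DataTypes.INT(), DataTypes.BIGINT());
        PredicateBuilder builder = new PredicateBuilder(rowType);

        // field 0 = 42 AND field 1 > 100
        Predicate predicate = PredicateBuilder.and(
                builder.equal(0, 42), builder.greaterThan(1, 100L));

        System.out.println(predicate.test(GenericRow.of(42, 200L))); // true
        System.out.println(predicate.test(GenericRow.of(42, 50L)));  // false
    }
}
```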
Modifier and Type | Method and Description |
---|---|
boolean |
SortBuffer.write(InternalRow record) |
boolean |
BinaryExternalSortBuffer.write(InternalRow record) |
boolean |
BinaryInMemorySortBuffer.write(InternalRow record)
Writes a given record to this sort buffer.
|
protected void |
BinaryIndexedSortable.writeIndexAndNormalizedKey(InternalRow record,
long currOffset)
Writes the index and the normalized key.
|
Modifier and Type | Method and Description |
---|---|
static BinaryInMemorySortBuffer |
BinaryInMemorySortBuffer.createBuffer(NormalizedKeyComputer normalizedKeyComputer,
AbstractRowDataSerializer<InternalRow> serializer,
RecordComparator comparator,
MemorySegmentPool memoryPool)
Creates an in-memory sorter in `insert` mode.
|
Modifier and Type | Method and Description |
---|---|
Long |
HilbertIndexer.RowProcessor.hilbertValue(InternalRow o) |
byte[] |
HilbertIndexer.index(InternalRow row) |
Modifier and Type | Method and Description |
---|---|
byte[] |
ZIndexer.index(InternalRow row) |
byte[] |
ZIndexer.RowProcessor.zvalue(InternalRow o) |
Modifier and Type | Class and Description |
---|---|
class |
SparkRow
An
InternalRow that wraps a Spark Row . |
Modifier and Type | Method and Description |
---|---|
InternalRow |
SparkRow.getRow(int i,
int i1) |
Modifier and Type | Method and Description |
---|---|
static org.apache.spark.sql.catalyst.InternalRow |
SparkInternalRow.fromPaimon(InternalRow row,
RowType rowType) |
SparkInternalRow |
SparkInternalRow.replace(InternalRow row) |
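The Spark bridge above converts in both directions: `SparkRow` wraps a Spark `Row` as a Paimon `InternalRow`, while `SparkInternalRow.fromPaimon` exposes a Paimon row through Spark's own `InternalRow` interface. A minimal sketch of the Paimon-to-Spark direction (the `org.apache.paimon.spark` package path is assumed):

```java
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.spark.SparkInternalRow;   // assumed package path
import org.apache.paimon.types.DataTypes;
import org.apache.paimon.types.RowType;

public class SparkBridgeSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(DataTypes.INT(), DataTypes.BIGINT());
        // Wraps the Paimon row so Spark can read it without copying.
        org.apache.spark.sql.catalyst.InternalRow sparkRow =
                SparkInternalRow.fromPaimon(GenericRow.of(1, 100L), rowType);
        System.out.println(sparkRow.numFields()); // 2
    }
}
```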
Modifier and Type | Method and Description |
---|---|
InternalRow |
SimpleStatsEvolution.Result.maxValues() |
InternalRow |
SimpleStatsEvolution.Result.minValues() |
InternalRow |
SimpleStats.toRow() |
Modifier and Type | Method and Description |
---|---|
static SimpleStats |
SimpleStats.fromRow(InternalRow row) |
Constructor and Description |
---|
Result(InternalRow minValues,
InternalRow maxValues,
InternalArray nullCounts) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
LocalTableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key)
TODO: remove synchronized and support multi-threaded lookup.
|
InternalRow |
TableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
LocalTableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key)
TODO: remove synchronized and support multi-threaded lookup.
|
InternalRow |
TableQuery.lookup(BinaryRow partition,
int bucket,
InternalRow key) |
Modifier and Type | Method and Description |
---|---|
LocalTableQuery |
LocalTableQuery.withCacheRowFilter(Filter<InternalRow> cacheRowFilter) |
Modifier and Type | Field and Description |
---|---|
protected InternalRow |
RowKeyExtractor.record |
Modifier and Type | Method and Description |
---|---|
InternalRow |
SinkRecord.row() |
Modifier and Type | Method and Description |
---|---|
RowKind |
RowKindGenerator.generate(InternalRow row) |
int |
TableWrite.getBucket(InternalRow row)
Calculate which bucket
row belongs to. |
int |
TableWriteImpl.getBucket(InternalRow row) |
BinaryRow |
TableWrite.getPartition(InternalRow row)
Calculate which partition
row belongs to. |
BinaryRow |
TableWriteImpl.getPartition(InternalRow row) |
static RowKind |
RowKindGenerator.getRowKind(RowKindGenerator rowKindGenerator,
InternalRow row) |
BinaryRow |
RowPartitionKeyExtractor.partition(InternalRow record) |
int |
FixedBucketWriteSelector.select(InternalRow row,
int numWriters) |
int |
WriteSelector.select(InternalRow row,
int numWriters)
Returns the logical writer index to which the given record should be written.
|
void |
FixedBucketRowKeyExtractor.setRecord(InternalRow record) |
void |
RowKeyExtractor.setRecord(InternalRow record) |
BinaryRow |
RowPartitionKeyExtractor.trimmedPrimaryKey(InternalRow record) |
void |
TableWrite.write(InternalRow row)
Write a row to the writer.
|
void |
TableWriteImpl.write(InternalRow row) |
void |
TableWrite.write(InternalRow row,
int bucket)
Write a row with bucket.
|
void |
TableWriteImpl.write(InternalRow row,
int bucket) |
SinkRecord |
TableWriteImpl.writeAndReturn(InternalRow row) |
SinkRecord |
TableWriteImpl.writeAndReturn(InternalRow row,
int bucket) |
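`TableWrite.write(InternalRow)` above is the entry point for writing rows; the target bucket and partition are derived from the row itself via `getBucket`/`getPartition`. A minimal batch-write sketch through the public builder API, assuming `table` is an existing `Table` with schema `(INT, BIGINT)` (the builder and commit classes are Paimon's public table API, not listed on this page):

```java
import java.util.List;

import org.apache.paimon.data.GenericRow;
import org.apache.paimon.table.Table;
import org.apache.paimon.table.sink.BatchTableWrite;
import org.apache.paimon.table.sink.BatchWriteBuilder;
import org.apache.paimon.table.sink.CommitMessage;

public class BatchWriteSketch {
    static void writeOnce(Table table) throws Exception {
        BatchWriteBuilder builder = table.newBatchWriteBuilder();
        BatchTableWrite write = builder.newWrite();
        write.write(GenericRow.of(1, 100L));               // TableWrite.write(InternalRow)
        List<CommitMessage> messages = write.prepareCommit();
        builder.newCommit().commit(messages);              // make the written files visible
        write.close();
    }
}
```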
Constructor and Description |
---|
SinkRecord(BinaryRow partition,
int bucket,
BinaryRow primaryKey,
InternalRow row) |
Constructor and Description |
---|
TableWriteImpl(RowType rowType,
FileStoreWrite<T> write,
KeyAndBucketExtractor<InternalRow> keyAndBucketExtractor,
TableWriteImpl.RecordExtractor<T> recordExtractor,
RowKindGenerator rowKindGenerator,
boolean ignoreDelete) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
ValueContentRowDataRecordIterator.next() |
Modifier and Type | Method and Description |
---|---|
default RecordReader<InternalRow> |
TableRead.createReader(List<Split> splits) |
RecordReader<InternalRow> |
TableRead.createReader(Split split) |
RecordReader<InternalRow> |
AbstractDataTableRead.createReader(Split split) |
default RecordReader<InternalRow> |
TableRead.createReader(TableScan.Plan plan) |
RecordReader<InternalRow> |
KeyValueTableRead.reader(Split split) |
abstract RecordReader<InternalRow> |
AbstractDataTableRead.reader(Split split) |
static RecordReader<InternalRow> |
KeyValueTableRead.unwrap(RecordReader<KeyValue> reader) |
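`TableRead.createReader` above turns one or more splits into a `RecordReader<InternalRow>`. A minimal scan-and-read sketch through the public read builder, assuming `table` is an existing `Table` whose first column is an `INT`:

```java
import java.util.List;

import org.apache.paimon.data.InternalRow;
import org.apache.paimon.reader.RecordReader;
import org.apache.paimon.table.Table;
import org.apache.paimon.table.source.ReadBuilder;
import org.apache.paimon.table.source.Split;

public class BatchReadSketch {
    static void readAll(Table table) throws Exception {
        ReadBuilder readBuilder = table.newReadBuilder();
        List<Split> splits = readBuilder.newScan().plan().splits();
        try (RecordReader<InternalRow> reader = readBuilder.newRead().createReader(splits)) {
            // Drains every batch of the reader and prints the first column.
            reader.forEachRemaining(row -> System.out.println(row.getInt(0)));
        }
    }
}
```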
Constructor and Description |
---|
MergeTreeSplitGenerator(Comparator<InternalRow> keyComparator,
long targetSplitSize,
long openFileCost,
boolean deletionVectorsEnabled,
CoreOptions.MergeEngine mergeEngine) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
IncrementalDiffSplitRead.createReader(DataSplit split) |
SplitRead<InternalRow> |
IncrementalDiffSplitRead.forceKeepDelete() |
SplitRead<InternalRow> |
SplitReadProvider.getOrCreate() |
SplitRead<InternalRow> |
RawFileSplitReadProvider.getOrCreate() |
SplitRead<InternalRow> |
IncrementalDiffReadProvider.getOrCreate() |
SplitRead<InternalRow> |
MergeFileSplitReadProvider.getOrCreate() |
SplitRead<InternalRow> |
IncrementalChangelogReadProvider.getOrCreate() |
SplitRead<InternalRow> |
IncrementalDiffSplitRead.withFilter(Predicate predicate) |
SplitRead<InternalRow> |
IncrementalDiffSplitRead.withIOManager(IOManager ioManager) |
SplitRead<InternalRow> |
IncrementalDiffSplitRead.withReadType(RowType readType) |
Constructor and Description |
---|
IncrementalChangelogReadProvider(java.util.function.Supplier<MergeFileSplitRead> supplier,
java.util.function.Consumer<SplitRead<InternalRow>> valuesAssigner) |
IncrementalDiffReadProvider(java.util.function.Supplier<MergeFileSplitRead> supplier,
java.util.function.Consumer<SplitRead<InternalRow>> valuesAssigner) |
MergeFileSplitReadProvider(java.util.function.Supplier<MergeFileSplitRead> supplier,
java.util.function.Consumer<SplitRead<InternalRow>> valuesAssigner) |
RawFileSplitReadProvider(java.util.function.Supplier<RawFileSplitRead> supplier,
java.util.function.Consumer<SplitRead<InternalRow>> valuesAssigner) |
Modifier and Type | Method and Description |
---|---|
static InternalRow |
FileMonitorTable.toRow(FileMonitorTable.FileChange change) |
Modifier and Type | Method and Description |
---|---|
RecordReader<InternalRow> |
TableLineageTable.TableLineageRead.createReader(Split split) |
protected static Iterator<InternalRow> |
AllTableOptionsTable.toRow(Map<String,Map<String,Map<String,String>>> option) |
Modifier and Type | Method and Description |
---|---|
static FileMonitorTable.FileChange |
FileMonitorTable.toFileChange(InternalRow row) |
Modifier and Type | Class and Description |
---|---|
class |
KeyProjectedRow
An
InternalRow to project key fields with RowKind.INSERT . |
class |
OffsetRow
An
InternalRow to wrap a row with an offset. |
class |
PartialRow
An
InternalRow to wrap a row with partial fields. |
class |
ProjectedRow
An implementation of
InternalRow which provides a projected view of the underlying InternalRow . |
Modifier and Type | Field and Description |
---|---|
protected InternalRow |
ProjectedRow.row |
Modifier and Type | Method and Description |
---|---|
InternalRow |
ProjectToRowFunction.apply(InternalRow input,
BinaryRow project) |
abstract InternalRow |
VersionedObjectSerializer.convertTo(T record) |
static InternalRow |
InternalRowUtils.copyInternalRow(InternalRow row,
RowType rowType) |
InternalRow |
PartialRow.getRow(int pos,
int numFields) |
InternalRow |
OffsetRow.getRow(int pos,
int numFields) |
InternalRow |
KeyProjectedRow.getRow(int pos,
int numFields) |
InternalRow |
ProjectedArray.getRow(int pos,
int numFields) |
InternalRow |
ProjectedRow.getRow(int pos,
int numFields) |
InternalRow |
RowIterator.next() |
InternalRow |
IteratorResultIterator.next() |
InternalRow |
IntObjectSerializer.toRow(Integer record) |
InternalRow |
KeyValueWithLevelNoReusingSerializer.toRow(KeyValue kv) |
abstract InternalRow |
ObjectSerializer.toRow(T record)
Convert a
T to InternalRow . |
InternalRow |
VersionedObjectSerializer.toRow(T record) |
Modifier and Type | Method and Description |
---|---|
static RecordReader<InternalRow> |
FileUtils.createFormatReader(FileIO fileIO,
FormatReaderFactory format,
Path file,
Long fileSize) |
Modifier and Type | Method and Description |
---|---|
InternalRow |
ProjectToRowFunction.apply(InternalRow input,
BinaryRow project) |
int |
UserDefinedSeqComparator.compare(InternalRow o1,
InternalRow o2) |
Object[] |
RowDataToObjectArrayConverter.convert(InternalRow rowData) |
abstract T |
VersionedObjectSerializer.convertFrom(int version,
InternalRow row) |
static InternalRow |
InternalRowUtils.copyInternalRow(InternalRow row,
RowType rowType) |
KeyValue |
KeyValueWithLevelNoReusingSerializer.fromRow(InternalRow row) |
abstract T |
ObjectSerializer.fromRow(InternalRow rowData)
Convert an
InternalRow to T . |
T |
VersionedObjectSerializer.fromRow(InternalRow row) |
Integer |
IntObjectSerializer.fromRow(InternalRow row) |
LinkedHashMap<String,String> |
InternalRowPartitionComputer.generatePartValues(InternalRow in) |
PartialRow |
PartialRow.replace(InternalRow row) |
OffsetRow |
OffsetRow.replace(InternalRow row) |
KeyProjectedRow |
KeyProjectedRow.replaceRow(InternalRow row) |
ProjectedRow |
ProjectedRow.replaceRow(InternalRow row)
Replaces the underlying
InternalRow backing this ProjectedRow . |
GenericRow |
RowDataToObjectArrayConverter.toGenericRow(InternalRow rowData) |
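`ProjectedRow`, `OffsetRow` and `KeyProjectedRow` above are zero-copy views that re-map field positions onto an underlying row, with `replaceRow`/`replace` swapping the backing row so one view can be reused across records. A minimal sketch, assuming a `ProjectedRow.from(int[])` factory (an assumption; only `replaceRow` appears on this page):

```java
import org.apache.paimon.data.GenericRow;
import org.apache.paimon.data.InternalRow;
import org.apache.paimon.utils.ProjectedRow;

public class ProjectedRowSketch {
    public static void main(String[] args) {
        InternalRow row = GenericRow.of(1, 100L, 3.5d);

        // Assumed factory: expose fields (2, 0) of the underlying row, in that order.
        ProjectedRow projected = ProjectedRow.from(new int[] {2, 0});
        projected.replaceRow(row);

        System.out.println(projected.getDouble(0)); // 3.5, originally field 2
        System.out.println(projected.getInt(1));    // 1, originally field 0
    }
}
```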
Modifier and Type | Method and Description |
---|---|
List<V> |
ObjectsCache.read(K key,
Long fileSize,
Filter<InternalRow> loadFilter,
Filter<InternalRow> readFilter) |
List<T> |
ObjectsFile.read(String fileName,
Long fileSize,
Filter<InternalRow> loadFilter,
Filter<InternalRow> readFilter) |
static <V> List<V> |
ObjectsFile.readFromIterator(CloseableIterator<InternalRow> inputIterator,
ObjectSerializer<V> serializer,
Filter<InternalRow> readFilter) |
Constructor and Description |
---|
PartialRow(int arity,
InternalRow row) |
Constructor and Description |
---|
IteratorResultIterator(IteratorWithException<InternalRow,IOException> records,
Runnable recycler,
Path filePath,
long pos) |
ObjectsCache(SegmentsCache<K> cache,
ObjectSerializer<V> projectedSerializer,
RowType formatSchema,
FunctionWithIOException<K,Long> fileSizeFunction,
BiFunctionWithIOE<K,Long,CloseableIterator<InternalRow>> reader) |
Copyright © 2023–2024 The Apache Software Foundation. All rights reserved.