@InterfaceAudience.Private
public class ParquetInputSplit
extends org.apache.hadoop.mapreduce.lib.input.FileSplit
implements org.apache.hadoop.io.Writable
| Constructor and Description |
|---|
| `ParquetInputSplit()` Writables must have a parameterless constructor |
| `ParquetInputSplit(org.apache.hadoop.fs.Path file, long start, long end, long length, String[] hosts, long[] rowGroupOffsets, String requestedSchema, Map<String,String> readSupportMetadata)` |
| `ParquetInputSplit(org.apache.hadoop.fs.Path path, long start, long length, String[] hosts, List<BlockMetaData> blocks, String requestedSchema, String fileSchema, Map<String,String> extraMetadata, Map<String,String> readSupportMetadata)` Deprecated. |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `readFields(DataInput hin)` |
| `String` | `toString()` |
| `void` | `write(DataOutput hout)` |
public ParquetInputSplit()
@Deprecated
public ParquetInputSplit(org.apache.hadoop.fs.Path path,
                         long start,
                         long length,
                         String[] hosts,
                         List<BlockMetaData> blocks,
                         String requestedSchema,
                         String fileSchema,
                         Map<String,String> extraMetadata,
                         Map<String,String> readSupportMetadata)

Deprecated. Use ParquetInputSplit(Path, long, long, long, String[], long[], String, Map) instead.

Parameters:
path -
start -
length -
hosts -
blocks -
requestedSchema -
fileSchema -
extraMetadata -
readSupportMetadata -

public ParquetInputSplit(org.apache.hadoop.fs.Path file,
                         long start,
                         long end,
                         long length,
                         String[] hosts,
                         long[] rowGroupOffsets,
                         String requestedSchema,
                         Map<String,String> readSupportMetadata)

Parameters:
file - the path of the file for that split
start - the start offset in the file
end - the end offset in the file
length - the actual size in bytes that we expect to read
hosts - the hosts with the replicas of this data
rowGroupOffsets - the offsets of the row groups selected, if loaded on the client
requestedSchema - the user requested schema
readSupportMetadata - metadata from the read support

public String toString()
Overrides:
toString in class org.apache.hadoop.mapreduce.lib.input.FileSplit

public final void readFields(DataInput hin) throws IOException

Specified by:
readFields in interface org.apache.hadoop.io.Writable
Overrides:
readFields in class org.apache.hadoop.mapreduce.lib.input.FileSplit
Throws:
IOException

public final void write(DataOutput hout) throws IOException

Specified by:
write in interface org.apache.hadoop.io.Writable
Overrides:
write in class org.apache.hadoop.mapreduce.lib.input.FileSplit
Throws:
IOException

Copyright © 2014. All Rights Reserved.
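The `readFields`/`write` pair follows Hadoop's standard `Writable` contract, which is also why the parameterless constructor exists: the framework instantiates the split reflectively and then populates it from the stream. A minimal self-contained sketch of that contract, using a hypothetical `ToySplit` (not the real `ParquetInputSplit`, which serializes its schema and row-group metadata as well) and only `java.io` types:

```java
import java.io.*;

// Hypothetical Writable-style class; the real ParquetInputSplit follows
// the same pattern against org.apache.hadoop.io.Writable.
class ToySplit {
    private String path;
    private long start;
    private long length;

    // Writables must have a parameterless constructor: the framework
    // creates the instance first, then calls readFields to fill it in.
    public ToySplit() {}

    public ToySplit(String path, long start, long length) {
        this.path = path;
        this.start = start;
        this.length = length;
    }

    // Serialize the fields in a fixed order.
    public void write(DataOutput out) throws IOException {
        out.writeUTF(path);
        out.writeLong(start);
        out.writeLong(length);
    }

    // Deserialize in exactly the same order as write.
    public void readFields(DataInput in) throws IOException {
        path = in.readUTF();
        start = in.readLong();
        length = in.readLong();
    }

    @Override
    public String toString() {
        return path + ":" + start + "+" + length;
    }

    public static void main(String[] args) throws IOException {
        ToySplit original = new ToySplit("/data/file.parquet", 4L, 1024L);

        // Serialize with write(DataOutput) ...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // ... and rebuild via the no-arg constructor + readFields(DataInput).
        ToySplit restored = new ToySplit();
        restored.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(restored); // prints "/data/file.parquet:4+1024"
    }
}
```

Because `readFields` must consume bytes in exactly the order `write` produced them, any change to the serialized field layout breaks splits exchanged between differently versioned jobs.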