public class ParquetInputSplit
extends org.apache.hadoop.mapreduce.InputSplit
implements org.apache.hadoop.io.Writable
| Constructor and Description |
|---|
| ParquetInputSplit() <br> Writables must have a parameterless constructor. |
| ParquetInputSplit(org.apache.hadoop.fs.Path path, long start, long length, String[] hosts, List&lt;BlockMetaData&gt; blocks, String requestedSchema, String fileSchema, Map&lt;String,String&gt; extraMetadata, Map&lt;String,String&gt; readSupportMetadata) |
| Modifier and Type | Method and Description |
|---|---|
| List&lt;BlockMetaData&gt; | getBlocks() |
| Map&lt;String,String&gt; | getExtraMetadata() |
| String | getFileSchema() |
| long | getLength() |
| String[] | getLocations() |
| org.apache.hadoop.fs.Path | getPath() |
| Map&lt;String,String&gt; | getReadSupportMetadata() |
| String | getRequestedSchema() |
| long | getStart() |
| void | readFields(DataInput in) |
| String | toString() |
| void | write(DataOutput out) |
public ParquetInputSplit()
public ParquetInputSplit(org.apache.hadoop.fs.Path path,
long start,
long length,
String[] hosts,
List<BlockMetaData> blocks,
String requestedSchema,
String fileSchema,
Map<String,String> extraMetadata,
Map<String,String> readSupportMetadata)
Parameters:
- path - the path to the file
- start - the offset of the block in the file
- length - the size of the block in the file
- hosts - the hosts where this block can be found
- blocks - the block metadata (column locations)
- requestedSchema - the requested schema for materialization
- fileSchema - the schema of the file
- extraMetadata - the app-specific metadata in the file
- readSupportMetadata - the read-support-specific metadata

public List<BlockMetaData> getBlocks()
public long getLength()
                throws IOException,
                       InterruptedException
Specified by: getLength in class org.apache.hadoop.mapreduce.InputSplit
Throws: IOException, InterruptedException

public String[] getLocations()
                     throws IOException,
                            InterruptedException
Specified by: getLocations in class org.apache.hadoop.mapreduce.InputSplit
Throws: IOException, InterruptedException

public long getStart()
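Together, getStart() and getLength() describe the byte range [start, start + length) of the file that this split covers, which is how a reader or scheduler decides whether a given file offset belongs to the split. A minimal self-contained sketch of that range check, with hypothetical values (not taken from the real class):

```java
public class SplitRange {
    // True if offset falls within a split covering [start, start + length).
    static boolean containsOffset(long start, long length, long offset) {
        return offset >= start && offset < start + length;
    }

    public static void main(String[] args) {
        long start = 134217728L;   // hypothetical split start: 128 MB into the file
        long length = 134217728L;  // hypothetical split length: 128 MB

        System.out.println(containsOffset(start, length, start));          // first byte: inside
        System.out.println(containsOffset(start, length, start + length)); // one past the end: outside
    }
}
```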
public org.apache.hadoop.fs.Path getPath()
public String getRequestedSchema()
public String getFileSchema()
public Map<String,String> getExtraMetadata()
public Map<String,String> getReadSupportMetadata()
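The write/readFields pair below follows Hadoop's Writable contract: the framework instantiates the split through its parameterless constructor and then calls readFields to populate it, which is why that constructor must exist. A self-contained sketch of the round trip, using a hypothetical SimpleSplit stand-in rather than the real class (which needs a Hadoop runtime):

```java
import java.io.*;

// Hypothetical stand-in illustrating the Writable pattern:
// a no-arg constructor plus symmetric write/readFields methods.
class SimpleSplit {
    String path;
    long start;
    long length;

    SimpleSplit() {}  // required: the framework instantiates first, then deserializes

    SimpleSplit(String path, long start, long length) {
        this.path = path;
        this.start = start;
        this.length = length;
    }

    void write(DataOutput out) throws IOException {
        out.writeUTF(path);
        out.writeLong(start);
        out.writeLong(length);
    }

    void readFields(DataInput in) throws IOException {
        path = in.readUTF();   // fields must be read in the exact order they were written
        start = in.readLong();
        length = in.readLong();
    }
}

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        SimpleSplit original = new SimpleSplit("/data/part-0.parquet", 0L, 4096L);

        // Serialize.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize into a freshly constructed instance, as Hadoop would.
        SimpleSplit copy = new SimpleSplit();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.path + " " + copy.start + " " + copy.length);
    }
}
```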
public void readFields(DataInput in) throws IOException
Specified by: readFields in interface org.apache.hadoop.io.Writable
Throws: IOException

public void write(DataOutput out) throws IOException
Specified by: write in interface org.apache.hadoop.io.Writable
Throws: IOException

Copyright © 2013. All Rights Reserved.