public class ParquetWriter<T> extends Object implements Closeable
| Modifier and Type | Field and Description |
|---|---|
| `static int` | `DEFAULT_BLOCK_SIZE` |
| `static int` | `DEFAULT_PAGE_SIZE` |
| Constructor and Description |
|---|
| `ParquetWriter(org.apache.hadoop.fs.Path file, WriteSupport<T> writeSupport)` Create a new ParquetWriter. |
| `ParquetWriter(org.apache.hadoop.fs.Path file, WriteSupport<T> writeSupport, CompressionCodecName compressionCodecName, int blockSize, int pageSize)` Create a new ParquetWriter. |
| `ParquetWriter(org.apache.hadoop.fs.Path file, WriteSupport<T> writeSupport, CompressionCodecName compressionCodecName, int blockSize, int pageSize, boolean enableDictionary, boolean validating)` Create a new ParquetWriter. |
| `ParquetWriter(org.apache.hadoop.fs.Path file, WriteSupport<T> writeSupport, CompressionCodecName compressionCodecName, int blockSize, int pageSize, int dictionaryPageSize, boolean enableDictionary, boolean validating)` Create a new ParquetWriter. |
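Every constructor takes a `WriteSupport<T>`, which translates a record of type `T` into calls on a `RecordConsumer`. As an illustration only, here is a minimal sketch of such an implementation for a single `int32` column. The class name `IntWriteSupport`, the one-column schema, and the `parquet.*` package names (used by the pre-Apache parquet-mr releases this page appears to document; later releases moved to `org.apache.parquet.*`) are all assumptions, not part of this reference.

```java
// Hypothetical minimal WriteSupport<Integer> for a one-column schema.
// Package names (parquet.*) are assumed from pre-Apache parquet-mr;
// newer releases use org.apache.parquet.* instead.
import java.util.HashMap;

import org.apache.hadoop.conf.Configuration;

import parquet.hadoop.api.WriteSupport;
import parquet.io.api.RecordConsumer;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

public class IntWriteSupport extends WriteSupport<Integer> {

    private static final MessageType SCHEMA = MessageTypeParser.parseMessageType(
        "message example { required int32 value; }");

    private RecordConsumer recordConsumer;

    @Override
    public WriteContext init(Configuration configuration) {
        // Declare the file schema plus any extra key/value metadata.
        return new WriteContext(SCHEMA, new HashMap<String, String>());
    }

    @Override
    public void prepareForWrite(RecordConsumer recordConsumer) {
        this.recordConsumer = recordConsumer;
    }

    @Override
    public void write(Integer record) {
        // Each record is bracketed by startMessage()/endMessage(),
        // and each written field by startField()/endField().
        recordConsumer.startMessage();
        recordConsumer.startField("value", 0);
        recordConsumer.addInteger(record);
        recordConsumer.endField("value", 0);
        recordConsumer.endMessage();
    }
}
```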
public static final int DEFAULT_BLOCK_SIZE
public static final int DEFAULT_PAGE_SIZE
public ParquetWriter(org.apache.hadoop.fs.Path file,
WriteSupport<T> writeSupport,
CompressionCodecName compressionCodecName,
int blockSize,
int pageSize)
throws IOException
Parameters:
- `file` - the file to create
- `writeSupport` - the implementation to write a record to a `RecordConsumer`
- `compressionCodecName` - the compression codec to use
- `blockSize` - the block size threshold
- `pageSize` - the page size threshold

Throws:
`IOException`

See Also:
`ParquetWriter(Path, WriteSupport, CompressionCodecName, int, int, boolean)`

public ParquetWriter(org.apache.hadoop.fs.Path file,
WriteSupport<T> writeSupport,
CompressionCodecName compressionCodecName,
int blockSize,
int pageSize,
boolean enableDictionary,
boolean validating)
throws IOException
Parameters:
- `file` - the file to create
- `writeSupport` - the implementation to write a record to a `RecordConsumer`
- `compressionCodecName` - the compression codec to use
- `blockSize` - the block size threshold
- `pageSize` - the page size threshold (both data and dictionary)
- `enableDictionary` - to turn dictionary encoding on
- `validating` - to turn on validation using the schema

Throws:
`IOException`

public ParquetWriter(org.apache.hadoop.fs.Path file,
WriteSupport<T> writeSupport,
CompressionCodecName compressionCodecName,
int blockSize,
int pageSize,
int dictionaryPageSize,
boolean enableDictionary,
boolean validating)
throws IOException
Parameters:
- `file` - the file to create
- `writeSupport` - the implementation to write a record to a `RecordConsumer`
- `compressionCodecName` - the compression codec to use
- `blockSize` - the block size threshold
- `pageSize` - the page size threshold
- `dictionaryPageSize` - the page size threshold for the dictionary pages
- `enableDictionary` - to turn dictionary encoding on
- `validating` - to turn on validation using the schema

Throws:
`IOException`

public ParquetWriter(org.apache.hadoop.fs.Path file,
WriteSupport<T> writeSupport)
throws IOException
Parameters:
- `file` - the file to create
- `writeSupport` - the implementation to write a record to a `RecordConsumer`

Throws:
`IOException`

public void write(T object)
             throws IOException
Throws:
`IOException`

public void close()
           throws IOException

Specified by:
`close` in interface `Closeable`

Specified by:
`close` in interface `AutoCloseable`

Throws:
`IOException`

Copyright © 2013. All Rights Reserved.
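Putting the pieces together, a typical life cycle is: construct the writer, call `write(T)` once per record, then `close()` to flush buffered pages and finish the file. The sketch below is illustrative only: the file path, record count, and `IntWriteSupport` (a hypothetical `WriteSupport<Integer>` implementation) are assumptions, and the `parquet.*` package names match the pre-Apache parquet-mr releases this page appears to document.

```java
// Hedged usage sketch for the five-argument constructor.
// IntWriteSupport is a hypothetical WriteSupport<Integer>; the path and
// record count are placeholders.
import java.io.IOException;

import org.apache.hadoop.fs.Path;

import parquet.hadoop.ParquetWriter;
import parquet.hadoop.metadata.CompressionCodecName;

public class WriterExample {
    public static void main(String[] args) throws IOException {
        ParquetWriter<Integer> writer = new ParquetWriter<Integer>(
            new Path("/tmp/example.parquet"),
            new IntWriteSupport(),
            CompressionCodecName.SNAPPY,
            ParquetWriter.DEFAULT_BLOCK_SIZE,  // row-group (block) size threshold
            ParquetWriter.DEFAULT_PAGE_SIZE);  // page size threshold
        try {
            for (int i = 0; i < 100; i++) {
                writer.write(i);
            }
        } finally {
            // close() flushes buffered pages and writes the file footer;
            // skipping it leaves an unreadable file.
            writer.close();
        }
    }
}
```

Since `ParquetWriter` implements `Closeable`, a try-with-resources block is an equivalent alternative to the explicit `try/finally` shown here.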