public class AvroParquetWriter<T extends org.apache.avro.generic.IndexedRecord> extends ParquetWriter<T>
Fields inherited from class ParquetWriter: DEFAULT_BLOCK_SIZE, DEFAULT_PAGE_SIZE

| Constructor and Description |
|---|
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema) Create a new AvroParquetWriter. |
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema, CompressionCodecName compressionCodecName, int blockSize, int pageSize) Create a new AvroParquetWriter. |
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema, CompressionCodecName compressionCodecName, int blockSize, int pageSize, boolean enableDictionary) Create a new AvroParquetWriter. |
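A minimal usage sketch of the two-argument constructor. The schema, record field, and output path here are hypothetical; `write()` and `close()` are inherited from ParquetWriter. (Requires the parquet-avro, avro, and hadoop-common dependencies on the classpath.)

```java
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;

public class AvroParquetWriterExample {
  public static void main(String[] args) throws IOException {
    // Hypothetical schema: a "User" record with a single string field.
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":"
        + "[{\"name\":\"name\",\"type\":\"string\"}]}");

    // Two-argument constructor: default block size, page size,
    // and no compression (see constructor details below).
    AvroParquetWriter<GenericRecord> writer =
        new AvroParquetWriter<GenericRecord>(
            new Path("/tmp/users.parquet"), schema);
    try {
      GenericRecord user = new GenericData.Record(schema);
      user.put("name", "alice");
      writer.write(user); // inherited from ParquetWriter
    } finally {
      writer.close();     // flushes buffered pages and writes the footer
    }
  }
}
```

To control compression and sizes, the five-argument constructor takes a CompressionCodecName plus explicit blockSize and pageSize values in place of the defaults.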
public AvroParquetWriter(org.apache.hadoop.fs.Path file,
                         org.apache.avro.Schema avroSchema,
                         CompressionCodecName compressionCodecName,
                         int blockSize,
                         int pageSize)
                  throws IOException

Create a new AvroParquetWriter.

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
compressionCodecName - Compression codec to use, or CompressionCodecName.UNCOMPRESSED.
blockSize - HDFS block size.
pageSize - See the Parquet write-up; blocks are subdivided into pages for alignment and other purposes.
Throws:
IOException

public AvroParquetWriter(org.apache.hadoop.fs.Path file,
                         org.apache.avro.Schema avroSchema,
                         CompressionCodecName compressionCodecName,
                         int blockSize,
                         int pageSize,
                         boolean enableDictionary)
                  throws IOException

Create a new AvroParquetWriter.

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
compressionCodecName - Compression codec to use, or CompressionCodecName.UNCOMPRESSED.
blockSize - HDFS block size.
pageSize - See the Parquet write-up; blocks are subdivided into pages for alignment and other purposes.
enableDictionary - Whether to use a dictionary to compress columns.
Throws:
IOException

public AvroParquetWriter(org.apache.hadoop.fs.Path file,
                         org.apache.avro.Schema avroSchema)
                  throws IOException

Create a new AvroParquetWriter. The default block size is 50 MB. The default page size is 1 MB. Default compression is no compression. (Inherited from ParquetWriter.)

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
Throws:
IOException

Copyright © 2013. All Rights Reserved.