Preface
Notes on the compression formats available when creating Hive tables with Spark.
Background
I was benchmarking the compression algorithms available for Hive tables stored as Parquet and ORC. Tables were created via SQL through the Spark Thrift Server, comparing the compression ratio and query performance of gzip and snappy for Parquet, and of snappy and zlib for ORC.
parquet
CREATE TABLE statement: append the following at the end

```sql
STORED AS PARQUET
```

Parquet's default compression is snappy. To switch to another codec such as gzip, append this instead:

```sql
STORED AS PARQUET TBLPROPERTIES('parquet.compression'='GZIP')
```
To verify that the setting took effect, check the file names under the table's storage path:
snappy files end with .snappy.parquet
gzip files end with .gz.parquet
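The suffix convention above can be captured in a tiny Python sketch (the helper name is hypothetical, not part of Spark; it only encodes the naming pattern described in this post):

```python
# Hypothetical helper: infer the compression codec of a Parquet output file
# from its name, following the suffix convention Spark uses when writing.
def codec_from_filename(name: str) -> str:
    if name.endswith(".snappy.parquet"):
        return "snappy"
    if name.endswith(".gz.parquet"):
        return "gzip"
    if name.endswith(".parquet"):
        return "uncompressed"
    raise ValueError(f"not a parquet file: {name}")

print(codec_from_filename("part-00000-abc.snappy.parquet"))  # snappy
print(codec_from_filename("part-00000-abc.gz.parquet"))      # gzip
```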
The codec can also be changed via a Spark configuration parameter:

```shell
--conf spark.sql.parquet.compression.codec=gzip
```
The definition in the Spark SQL source code:

```scala
val PARQUET_COMPRESSION = buildConf("spark.sql.parquet.compression.codec")
  .doc("Sets the compression codec used when writing Parquet files. If either `compression` or " +
    "`parquet.compression` is specified in the table-specific options/properties, the " +
    "precedence would be `compression`, `parquet.compression`, " +
    "`spark.sql.parquet.compression.codec`. Acceptable values include: none, uncompressed, " +
    "snappy, gzip, lzo, brotli, lz4, zstd.")
  .version("1.1.1")
  .stringConf
  .transform(_.toLowerCase(Locale.ROOT))
  .checkValues(Set("none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd"))
  .createWithDefault("snappy")
```
As the source shows, Parquet's default compression is snappy, and the accepted values are: "none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd".
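The transform/checkValues chain in the config definition can be mimicked in a few lines of Python (an illustrative sketch of the validation logic only; the function name is hypothetical, not Spark's code):

```python
# Sketch of how spark.sql.parquet.compression.codec is validated:
# lower-case the value first (transform), then check membership (checkValues).
PARQUET_CODECS = {"none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd"}

def validate_parquet_codec(value: str) -> str:
    v = value.lower()
    if v not in PARQUET_CODECS:
        raise ValueError(f"invalid codec: {value!r}, allowed: {sorted(PARQUET_CODECS)}")
    return v

print(validate_parquet_codec("GZIP"))  # gzip -- case-insensitive, like Spark's transform
```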
orc
CREATE TABLE statement: append the following at the end

```sql
STORED AS ORC
```

In Spark, ORC also defaults to snappy compression. To switch to another codec such as zlib, append this instead:

```sql
STORED AS ORC TBLPROPERTIES('orc.compress'='zlib')
```
Or via a Spark configuration parameter:

```shell
--conf spark.sql.orc.compression.codec=zlib
```
The definition in the Spark SQL source code:

```scala
val ORC_COMPRESSION = buildConf("spark.sql.orc.compression.codec")
  .doc("Sets the compression codec used when writing ORC files. If either `compression` or " +
    "`orc.compress` is specified in the table-specific options/properties, the precedence " +
    "would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`." +
    "Acceptable values include: none, uncompressed, snappy, zlib, lzo.")
  .version("2.3.0")
  .stringConf
  .transform(_.toLowerCase(Locale.ROOT))
  .checkValues(Set("none", "uncompressed", "snappy", "zlib", "lzo"))
  .createWithDefault("snappy")
```
The accepted values are: "none", "uncompressed", "snappy", "zlib", "lzo".
Note that the Parquet key is parquet.compression while the ORC key is orc.compress; don't mix them up. I initially wrote orc.compression, the setting had no effect, and I wrongly concluded the codec couldn't be set via SQL.
From the doc string above ("the precedence would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`") we can see that parquet.compression and orc.compress take precedence over the Spark configuration parameter. As for the highest-priority compression option, I haven't yet figured out how it is used.
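The precedence described in the doc string can be modeled as a small lookup chain. The option/property names below come straight from the doc string; the resolver function itself is a hypothetical sketch, not Spark's implementation:

```python
# Sketch of the codec-resolution order for ORC, per the doc string:
# `compression` option > `orc.compress` table property >
# spark.sql.orc.compression.codec > default ("snappy", per createWithDefault).
def resolve_orc_codec(options: dict, table_props: dict, spark_conf: dict) -> str:
    for source, key in ((options, "compression"),
                        (table_props, "orc.compress"),
                        (spark_conf, "spark.sql.orc.compression.codec")):
        if key in source:
            return source[key].lower()
    return "snappy"

# The Spark conf only applies when neither table-level setting is present:
print(resolve_orc_codec({}, {"orc.compress": "ZLIB"},
                        {"spark.sql.orc.compression.codec": "lzo"}))  # zlib
```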
json
Uncompressed by default.
Available codecs: none, bzip2, gzip, lz4, snappy, deflate
text
Uncompressed by default.
Available codecs: none, bzip2, gzip, lz4, snappy, deflate
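To wrap up, the codec sets listed throughout this post can be collected in one place (a convenience sketch summarizing the lists above, not a Spark API):

```python
# Allowed compression codecs per output format, as listed in this post.
ALLOWED_CODECS = {
    "parquet": {"none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd"},
    "orc":     {"none", "uncompressed", "snappy", "zlib", "lzo"},
    "json":    {"none", "bzip2", "gzip", "lz4", "snappy", "deflate"},
    "text":    {"none", "bzip2", "gzip", "lz4", "snappy", "deflate"},
}

def is_supported(fmt: str, codec: str) -> bool:
    return codec.lower() in ALLOWED_CODECS.get(fmt.lower(), set())

print(is_supported("orc", "ZLIB"))      # True
print(is_supported("parquet", "zlib"))  # False -- zlib is an ORC codec, not Parquet
```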