Parallel map processing of LZO files, and a toolset for managing compressed files

Date: 2019-09-07 | Category: System Operation

Zutils is a set of tools for managing compressed files; the supported compression formats are gzip, bzip2, lzip, and xz. The utilities provided by the current version are zcat, zcmp, zdiff, zgrep, and ztest.

After LZO is enabled on a Hadoop cluster, some additional configuration is still needed before the cluster can run parallel map tasks over a single LZO file and thereby speed up job execution.

Zutils is a collection of utilities able to deal with any combination of compressed and non-compressed files transparently. If any given file, including standard input, is compressed, its decompressed content is used. Compressed files are decompressed on the fly; no temporary files are created. These utilities are not wrapper scripts but safer and more efficient C++ programs. In particular the "--recursive" option is very efficient in those utilities supporting it.
The provided utilities are:
Zcat - Decompresses and copies files to standard output.
Zcmp - Decompresses and compares two files byte by byte.
Zdiff - Decompresses and compares two files line by line.
Zgrep - Decompresses and searches files for a regular expression.
Ztest - Tests integrity of compressed files.
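
Because the utilities detect compression transparently, the same command line works on any mix of inputs. A minimal illustration, with hypothetical file names (check zgrep --help for the exact recursive option in your version):

    zcat notes.txt notes.txt.gz          # concatenates plain and gzip'd files alike
    zdiff config.old.gz config.new.xz    # compares files compressed with different formats
    zgrep -r "ERROR" /var/log/archive/   # recursive search through compressed logs
    ztest backups/*.lz                   # checks integrity without extracting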

First, create an index for the LZO files. The following command builds indexes for all LZO files in a given directory:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.LzoIndexer /log/source/cd/

Indexing with this command takes a while: for one of my files, 7.5 GB in size, building the index took about 2 minutes 30 seconds. There is also a second indexer class, com.hadoop.compression.lzo.DistributedLzoIndexer. I have seen an article comparing the two options; its explanation was not entirely clear to me, but using the latter reduces the time spent building the index without affecting how the MapReduce jobs run. For example:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /log/source/cd/    
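
Both indexers write a .lzo.index file next to each data file; to confirm the indexes exist, list the directory used above:

    $HADOOP_HOME/bin/hadoop fs -ls /log/source/cd/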

下一场,在Hive中创制表时,要内定INPUTFORMAT和OUTPUTFORMAT,不然集群如故无法对lzo举行互动的map管理。在hive中创设表时参加下列语句:

STORED AS
INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
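
Put together, a complete table definition might look like the sketch below, run here through the hive CLI; the table and column names are placeholders, and only the two format classes come from this article:

    hive -e 'CREATE TABLE lzo_logs (line STRING)
             STORED AS
             INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
             OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";'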

After these two steps, the improvement in Hive execution speed is substantial. In our test, we ran a moderately complex Hive query against a 7.5 GB LZO file: with the configuration above it took only 34 seconds, versus 180 seconds before.

README.md
Hadoop-LZO
Hadoop-LZO is a project to bring splittable LZO compression to Hadoop. LZO is an ideal compression format for Hadoop due to its combination of speed and compression size. However, LZO files are not natively splittable, meaning the parallelism that is the core of Hadoop is gone. This project re-enables that parallelism with LZO compressed files, and also comes with standard utilities (input/output streams, etc) for working with LZO files.

Origins
This project builds off the great work done at . As of issue 41, the differences in this codebase are the following.

It fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible lzo files, and also fixes the compressor to follow the lzo standard when compressing small or uncompressible chunks. It also fixes a number of inconsistently caught and thrown exception cases that can occur when the lzo writer gets killed mid-stream, plus some other smaller issues (see commit log).
It adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class.
It adds an easier way to index lzo files (com.hadoop.compression.lzo.LzoIndexer).
It adds an even easier way to index lzo files, in a distributed manner (com.hadoop.compression.lzo.DistributedLzoIndexer).

Hadoop and LZO, Together at Last
LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable. Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream starting at a random offset in the file. LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries. In addition to providing LZO decompression support, these classes provide an in-process indexer (com.hadoop.compression.lzo.LzoIndexer) and a map-reduce style indexer which will read a set of LZO files and output the offsets of LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split into multiple mappers and processed in parallel. Because it is compressed, less data is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance impact from having to decompress data as it's read off disk.

Building and Configuring
To get started, see . This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page.

You can read more about Hadoop, LZO, and how we're using it at Twitter at .

Once the libs are built and installed, you may want to add them to the class paths and library paths. That is, in hadoop-env.sh, set

    export HADOOP_CLASSPATH=/path/to/your/hadoop-lzo-lib.jar
    export JAVA_LIBRARY_PATH=/path/to/hadoop-lzo-native-libs:/path/to/standard-hadoop-native-libs
Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line

    JAVA_LIBRARY_PATH=''
because it prevents Hadoop from keeping the alteration you made to JAVA_LIBRARY_PATH above. (Update: see). Make sure you restart your jobtrackers and tasktrackers after uploading and changing configs so that they take effect.
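
One way to apply that workaround from the shell (the script path is a placeholder, as above):

    sed -i "s/^JAVA_LIBRARY_PATH=''/# JAVA_LIBRARY_PATH=''/" /path/to/hadoop/bin/hadoop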

Using Hadoop and LZO
Reading and Writing LZO Data
The project provides LzoInputStream and LzoOutputStream wrapping regular streams, to allow you to easily read and write compressed LZO data.

Indexing LZO Files
At this point, you should also be able to use the indexer to index lzo files in Hadoop (recall: this makes them splittable, so that they can be analyzed in parallel in a mapreduce job). Imagine that big_file.lzo is a 1 GB LZO file. You have two options:

index it in-process via:

hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer big_file.lzo
index it in a map-reduce job via:

hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer big_file.lzo
Either way, after 10-20 seconds there will be a file named big_file.lzo.index. The newly-created index file tells the LzoTextInputFormat's getSplits function how to break the LZO file into splits that can be decompressed and processed in parallel. Alternatively, if you specify a directory instead of a filename, both indexers will recursively walk the directory structure looking for .lzo files, indexing any that do not already have corresponding .lzo.index files.

Running MR Jobs over Indexed Files
Now run any job, say wordcount, over the new file. In Java-based M/R jobs, just replace any uses of TextInputFormat by LzoTextInputFormat. In streaming jobs, add "-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat" (streaming still uses the old APIs, and needs a class that inherits from org.apache.hadoop.mapred.InputFormat). For Pig jobs, email me or check the pig list -- I have custom LZO loader classes that work but are not (yet) contributed back.
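
As a sketch, a streaming wordcount over the indexed file could be launched like this; the streaming jar path and the mapper/reducer commands are illustrative, not part of the original:

    hadoop jar /path/to/hadoop-streaming.jar \
        -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
        -input big_file.lzo \
        -output big_file_wordcount \
        -mapper 'tr -s " " "\n"' \
        -reducer 'uniq -c'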

Note that if you forget to index an .lzo file, the job will work but will process the entire file in a single split, which will be less efficient.
