Exporting and Importing HBase Table Data (Export/Import)

This post walks through exporting an HBase table named table1, which contains a column family named cf1, and then importing the exported files into another environment.

Environment

HBase              Zookeeper    Hadoop    OS
2.5.1 (cluster)    3.4.8        3.2.3     CentOS 7.9

Export

1. Record the HBase table schema

HBase has no command that directly exports a table's creation statement. Instead, run describe in the HBase shell to see the table name and column family attributes, then assemble the create statement from that output; it will be needed during the import:

> hbase shell
hbase:001:0> describe 'table1'
Table table1 is ENABLED
table1, {TABLE_ATTRIBUTES => {METADATA => {'hbase.store.file-tracker.impl' => 'DEFAULT'}}}
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf1', INDEX_BLOCK_ENCODING => 'NONE', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'SNAPPY', BLOCKCACHE => 'true', BLOCKSIZE => '65536 B (64KB)'}

Cleaned up, this yields the complete create statement:

create 'table1',
    {NAME => 'cf1', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE',
    DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0',
    REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false',
    COMPRESSION => 'SNAPPY', BLOCKCACHE => 'true'}

2. Export the HBase table data

HBase's Export tool exports the data of table1 into the specified HDFS directory /foo/table1 (this directory must not already exist), producing files named in the part-m-00000_ pattern:

> hbase org.apache.hadoop.hbase.mapreduce.Export -D mapreduce.output.fileoutputformat.compress=true table1 /foo/table1

The command above launches a MapReduce job. Use -D to set MapReduce properties such as output compression; you can also specify how many versions to export and a timestamp range. If omitted, only the latest version over the full time range is exported. I found the official documentation's coverage of the Export tool incomplete, as many -D properties are missing; see the appendix at the end of this post if you need them.
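
For example, to export up to 3 versions of each cell written within a given timestamp range, pass the positional arguments shown in the appendix usage. A sketch, where the epoch-millisecond timestamps and the output directory are purely illustrative:

> hbase org.apache.hadoop.hbase.mapreduce.Export \
    -D mapreduce.output.fileoutputformat.compress=true \
    table1 /foo/table1_range 3 1647000000000 1648000000000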

How long the export job runs depends mainly on the data volume. Once it finishes, the results are visible on HDFS as compressed SequenceFiles:

> hdfs dfs -ls /foo/table1
Found 3 items
-rw-r--r--   3 hdfs supergroup          0 2022-03-22 13:20 /foo/table1/_SUCCESS
-rw-r--r--   3 hdfs supergroup 1609084326 2022-03-22 13:20 /foo/table1/part-m-00000_
-rw-r--r--   3 hdfs supergroup 1151428929 2022-03-22 13:20 /foo/table1/part-m-00001_
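
For a quick sanity check of the exported files, hdfs dfs -text can decode compressed SequenceFiles. Note that the values are serialized HBase Result objects, so the dump is only partially human-readable (row keys are usually recognizable):

> hdfs dfs -text /foo/table1/part-m-00000_ | head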

Some articles suggest exporting directly to a local path via a file:///local/path URI. In my tests, the exported files can end up on the local disk of any node in the cluster, and importing from a file:// path has the same problem, often failing with file-not-found errors. I recommend exporting to HDFS first and downloading from there.

To download the files from HDFS to a local disk:

> hdfs dfs -get /foo/table1/* /my/local/path

Import

1. Create the HBase table

Before importing, create the table and its column family manually. The table name may differ from the exported table's, but the column family name must be identical. You may also create the table with different parameters than the original, such as pre-split regions (see the sketch below). To keep this post short, the example uses default options rather than the full statement assembled earlier:

> hbase shell
hbase:001:0> create 'table1', 'cf1'

Column qualifiers inside the column family need not be created by hand; they are restored automatically when the data is imported in the next step.
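
If the table is large, pre-splitting it at creation time avoids funneling the whole import through a single initial region. A minimal sketch, where the split keys are purely illustrative and should match your actual row key distribution:

hbase:002:0> create 'table1', 'cf1', {SPLITS => ['a', 'm', 't']}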

2. Import the HBase table data

If the data files are on a local disk, upload them to HDFS first (with multiple source files, the target directory must already exist; create it with hdfs dfs -mkdir -p if necessary):

> hdfs dfs -put /my/local/path/* /bar/table1

Restore the previously exported data into table1. This command, too, runs a MapReduce job:

> hbase org.apache.hadoop.hbase.mapreduce.Import table1 /bar/table1/*

After the import job finishes, check that the first three rows look correct:

> hbase shell
hbase:001:0> scan 'table1', {LIMIT=>3}
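
For a fuller verification, compare total row counts between the source and target tables. HBase ships a MapReduce-based RowCounter tool; the shell's count command works too but is slow on large tables:

> hbase org.apache.hadoop.hbase.mapreduce.RowCounter table1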

Common issues

Because the export and import commands run as MapReduce jobs, they may fail with an error saying several required configuration properties are missing. As the error message suggests, add the following to mapred-site.xml (fill in the values for your setup):

<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=...</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=...</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=...</value>
</property>
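
All three properties usually point at the same Hadoop installation directory. For example, assuming Hadoop lives under /opt/hadoop-3.2.3 (an illustrative path), each value would read:

<value>HADOOP_MAPRED_HOME=/opt/hadoop-3.2.3</value>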

Appendix

The official HBase documentation does not list all the parameters of Export and Import, and few articles online cover them. In fact, running either command with no arguments prints a usage message describing every parameter, reproduced below.

Export command reference

> hbase org.apache.hadoop.hbase.mapreduce.Export
ERROR: Wrong number of arguments: 0
Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]

  Note: -D properties will be applied to the conf used.
  For example:
   -D mapreduce.output.fileoutputformat.compress=true
   -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
   -D mapreduce.output.fileoutputformat.compress.type=BLOCK
  Additionally, the following SCAN properties can be specified
  to control/limit what is exported..
   -D hbase.mapreduce.scan.column.family=<family1>,<family2>, ...
   -D hbase.mapreduce.include.deleted.rows=true
   -D hbase.mapreduce.scan.row.start=<ROWSTART>
   -D hbase.mapreduce.scan.row.stop=<ROWSTOP>
   -D hbase.client.scanner.caching=100
   -D hbase.export.visibility.labels=<labels>
For tables with very wide rows consider setting the batch size as below:
   -D hbase.export.scanner.batch=10
   -D hbase.export.scanner.caching=100
   -D mapreduce.job.name=jobName - use the specified mapreduce job name for the export
For MR performance consider the following properties:
   -D mapreduce.map.speculative=false
   -D mapreduce.reduce.speculative=false

Import command reference

> hbase org.apache.hadoop.hbase.mapreduce.Import
ERROR: Wrong number of arguments: 0
Usage: Import [options] <tablename> <inputdir>
By default Import will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
  -Dimport.bulk.output=/path/for/output
If there is a large result that includes too much Cell whitch can occur OOME caused by the memery sort in reducer, pass the option:
  -Dimport.bulk.hasLargeResult=true
 To apply a generic org.apache.hadoop.hbase.filter.Filter to the input, use
  -Dimport.filter.class=<name of filter class>
  -Dimport.filter.args=<comma separated list of args for filter
 NOTE: The filter will be applied BEFORE doing key renames via the HBASE_IMPORTER_RENAME_CFS property. Futher, filters will only use the Filter#filterRowKey(byte[] buffer, int offset, int length) method to identify  whether the current row needs to be ignored completely for processing and  Filter#filterCell(Cell) method to determine if the Cell should be added; Filter.ReturnCode#INCLUDE and #INCLUDE_AND_NEXT_COL will be considered as including the Cell.
To import data exported from HBase 0.94, use
  -Dhbase.import.version=0.94
  -D mapreduce.job.name=jobName - use the specified mapreduce job name for the import
For performance consider the following options:
  -Dmapreduce.map.speculative=false
  -Dmapreduce.reduce.speculative=false
  -Dimport.wal.durability=<Used while writing data to hbase. Allowed values are the supported durability values like SKIP_WAL/ASYNC_WAL/SYNC_WAL/...>
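
Building on the usage above, the bulk-load variant writes HFiles instead of issuing individual puts, which is considerably faster for large imports. A hedged sketch, assuming HBase 2.x where the completebulkload command is available (the /bar/hfiles path is illustrative):

> hbase org.apache.hadoop.hbase.mapreduce.Import -Dimport.bulk.output=/bar/hfiles table1 /bar/table1
> hbase completebulkload /bar/hfiles table1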

References

https://hbase.apache.org/book.html#export
https://hbase.apache.org/book.html#import
https://stackoverflow.com/questions/34243134/what-is-sequence-file-in-hadoop
