
Big Data Cluster Migration

Migrating the Data

Prepare two clusters; here I use an Apache Hadoop cluster and a CDH cluster.

Start both clusters.

Once both clusters are up, migrate the data of the dwd, dws, and ads Hive databases from the Apache cluster to the CDH cluster.

On the Apache cluster, add the CDH NameNode hostnames to /etc/hosts and distribute the file to every machine, as sketched below.
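The hosts entries might look like this, with hypothetical IPs standing in for the real CDH NameNode addresses (hadoop104 and hadoop106 are the aliases used in the configuration below):

[root@hadoop101 ~]# cat >> /etc/hosts <<'EOF'
# hypothetical IPs of the two CDH NameNodes
192.168.1.104 hadoop104
192.168.1.106 hadoop106
EOF
[root@hadoop101 ~]# for host in hadoop102 hadoop103; do scp /etc/hosts $host:/etc/hosts; done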

Because both clusters run in HA mode, the CDH cluster's nameservice must be configured on the Apache cluster so that distcp can resolve it:

[root@hadoop101 hadoop]# vim /opt/module/hadoop-3.1.3/etc/hadoop/hdfs-site.xml
<!-- Declare both nameservices -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster,nameservice1</value>
</property>
<!-- Mark the local nameservice -->
<property>
  <name>dfs.internal.nameservices</name>
  <value>mycluster</value>
</property>
<!-- Configure the NameNodes of mycluster -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>hadoop101:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>hadoop102:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>hadoop103:8020</value>
</property>
<!-- Configure the NameNodes of nameservice1 -->
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>namenode30,namenode37</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode30</name>
  <value>hadoop104:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode37</name>
  <value>hadoop106:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.nameservice1.namenode30</name>
  <value>hadoop104:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.nameservice1.namenode37</name>
  <value>hadoop106:9870</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.nameservice1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- HTTP endpoints of the mycluster NameNodes -->
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>hadoop101:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>hadoop102:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn3</name>
  <value>hadoop103:9870</value>
</property>
<!-- Java class HDFS clients use to find the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
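After distributing this file to every Apache machine, a quick sanity check confirms the remote nameservice resolves (assuming the configuration above is in place):

[root@hadoop101 hadoop]# hdfs dfs -ls hdfs://nameservice1/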

Modify the hosts file on the CDH machines:

[root@hadoop101 ~]# vim /etc/hosts

Distribute the file. hadoop104, hadoop105, and hadoop106 here correspond to the Apache cluster's hadoop101, hadoop102, and hadoop103 respectively, as sketched below.
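The alias entries on the CDH machines might look like this, with hypothetical IPs standing in for the Apache machines' real addresses:

[root@hadoop101 ~]# cat >> /etc/hosts <<'EOF'
# hypothetical IPs of the Apache machines, aliased for the CDH side
192.168.1.101 hadoop104
192.168.1.102 hadoop105
192.168.1.103 hadoop106
EOF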

Likewise, modify the CDH cluster configuration by adding the following to every hdfs-site.xml:

<property>
  <name>dfs.nameservices</name>
  <value>mycluster,nameservice1</value>
</property>
<property>
  <name>dfs.internal.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>hadoop104:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>hadoop105:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>hadoop106:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>hadoop104:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>hadoop105:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn3</name>
  <value>hadoop106:9870</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
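Symmetrically, the CDH side should now be able to resolve the Apache nameservice; a quick check (again assuming the configuration above):

[root@hadoop101 ~]# hdfs dfs -ls hdfs://mycluster/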

One last and important note: because the three machines in both my Apache cluster and my CDH cluster are named hadoop101, hadoop102, and hadoop103, hostname-based access has to be turned off in favor of IP-based access.

In CDH, uncheck the corresponding box in Cloudera Manager; on the Apache side, set the property to false.
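The original does not name the setting, so as an assumption: it is most likely the use-DataNode-hostname switch, which on the Apache side would go into hdfs-site.xml like this (in CDH the same switch appears as a checkbox in Cloudera Manager):

<!-- assumption: have clients reach DataNodes by IP rather than hostname -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
</property>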

Now run the hadoop distcp command to perform the migration. -Dmapred.job.queue.name specifies the YARN queue (the default is the default queue). If both clusters were configured as above, the command can be executed from either the CDH or the Apache cluster.

[root@hadoop101 hadoop]# hadoop distcp -Dmapred.job.queue.name=hive hdfs://mycluster/user/hive/warehouse/dwd.db/ hdfs://nameservice1/user/hive/warehouse
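If the job needs to be re-run (for example after a partial failure), distcp's -update flag copies only missing or changed files, and -m and -bandwidth cap the number of map tasks and the per-map bandwidth in MB/s. A sketch with arbitrarily chosen limits:

[root@hadoop101 hadoop]# hadoop distcp -Dmapred.job.queue.name=hive -update -m 20 -bandwidth 50 hdfs://mycluster/user/hive/warehouse/dwd.db/ hdfs://nameservice1/user/hive/warehouse/dwd.db

Note the target is the dwd.db directory itself here: with -update, distcp copies the contents of the source directory into the target rather than creating it underneath.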

This launches an MR job that performs the copy.

Check the CDH NameNode web UI on port 9870 to confirm the files have arrived.

The data has been migrated successfully. Next, migrate the Hive table definitions by writing a shell script:

[root@hadoop101 module]# vim exportHive.sh
#!/bin/bash
# List all tables in dwd, then dump each table's DDL into tablesDDL.txt
hive -e "use dwd;show tables" > tables.txt
cat tables.txt | while read eachline
do
  hive -e "use dwd;show create table $eachline" >> tablesDDL.txt
  echo ";" >> tablesDDL.txt
done
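Make the script executable and run it:

[root@hadoop101 module]# chmod +x exportHive.sh
[root@hadoop101 module]# ./exportHive.sh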

After the script finishes, distribute tablesDDL.txt to the CDH cluster:

[root@hadoop101 module]# scp tablesDDL.txt hadoop104:/opt/module/

Then import the table definitions on the CDH side. First, open Hive on the CDH cluster (hadoop101 here, the machine aliased as hadoop104 above) and create the dwd database:

[root@hadoop101 module]# hive
hive> create database dwd;

After creating the database, add use dwd; at the very top of tablesDDL.txt.
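That can be done in an editor, or with a one-line sed insert:

[root@hadoop101 module]# sed -i '1i use dwd;' tablesDDL.txt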

Also replace every createtab_stmt header (the column header Hive prints above each SHOW CREATE TABLE result) with a space:

[root@hadoop101 module]# sed -i 's#createtab_stmt# #g' tablesDDL.txt

Finally, run hive -f to import the table definitions:
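[root@hadoop101 module]# hive -f tablesDDL.txt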

Lastly, refresh each table's partitions; the data only becomes readable after the partition metadata is repaired. Write a script:

[root@hadoop101 module]# vim msckPartition.sh
#!/bin/bash
# Repair the partition metadata of every table in dwd
hive -e "use dwd;show tables" > tables.txt
cat tables.txt | while read eachline
do
  hive -e "use dwd;MSCK REPAIR TABLE $eachline"
done
[root@hadoop101 module]# chmod +x msckPartition.sh
[root@hadoop101 module]# ./msckPartition.sh
Once the partitions are repaired, query the tables to verify the data.
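For example, a quick row count (the table name here is hypothetical; use any table that exists in dwd):

[root@hadoop101 module]# hive -e "use dwd; select count(*) from dwd_order_info;"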
