Archival and Analytics

Republished on php中文网, 2016-06-01.

May 16, 2014

by severalnines

We won’t bore you with buzzwords like volume, velocity and variety. This post is for MySQL users who want to get their hands dirty with Hadoop, so roll up your sleeves and prepare for work. Why would you ever want to move MySQL data into Hadoop? One good reason is archival and analytics. You might not want to delete old data, but rather move it into Hadoop and make it available for further analysis at a later stage. 

In this post, we are going to deploy a Hadoop Cluster and export data in bulk from a Galera Cluster using Apache Sqoop. Sqoop is a well-proven approach for bulk data loading from a relational database into the Hadoop File System. There is also the Hadoop Applier available from MySQL Labs, which works by retrieving INSERT queries from the MySQL master binlog and writing them into a file in HDFS in real-time (yes, it applies INSERTs only).

We will use Apache Ambari to deploy Hadoop (HDP 2.1) on three servers. We have a clustered WordPress site running on Galera, and for the purpose of this blog, we will export some user data to Hadoop for archiving purposes. The database name is wordpress, and we will use Sqoop to import the data into a Hive table on HDFS. The following diagram illustrates our setup:

[Diagram: WordPress on a three-node Galera Cluster behind HAProxy (on the ClusterControl host), exporting to a three-node Hadoop/HDP cluster via Sqoop]

The ClusterControl node has been installed with an HAProxy instance to load balance Galera connections, listening on port 33306.
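ClusterControl generates this HAProxy configuration for you; purely as an illustrative sketch (not the actual generated file, and the listener name is hypothetical), a TCP listener on port 33306 fronting the three Galera nodes might look like this:

```
# /etc/haproxy/haproxy.cfg (fragment) -- illustrative sketch only
listen galera_cluster
    bind *:33306
    mode tcp
    balance leastconn
    option tcpka
    server galera1 192.168.0.101:3306 check
    server galera2 192.168.0.102:3306 check
    server galera3 192.168.0.103:3306 check
```

Any of the three Galera nodes can serve reads and writes, so a simple least-connections TCP balance is sufficient here.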

Prerequisites

All hosts are running CentOS 6.5 with the firewall and SELinux turned off. All servers synchronize their time via NTP. Hostnames must be FQDNs, or you must define all hosts in the /etc/hosts file across all nodes. Each host has been configured with the following host definitions:

192.168.0.100		clustercontrol haproxy mysql
192.168.0.101		mysql1 galera1
192.168.0.102		mysql2 galera2
192.168.0.103		mysql3 galera3
192.168.0.111		hadoop1 hadoop1.cluster.com
192.168.0.112		hadoop2 hadoop2.cluster.com
192.168.0.113		hadoop3 hadoop3.cluster.com

Create an SSH key and configure passwordless SSH from hadoop1 to the other Hadoop nodes, so that Ambari Server can automate the deployment. On hadoop1, run the following commands as root:

$ ssh-keygen -t rsa # press Enter for all prompts
$ ssh-copy-id -i ~/.ssh/id_rsa hadoop1.cluster.com
$ ssh-copy-id -i ~/.ssh/id_rsa hadoop2.cluster.com
$ ssh-copy-id -i ~/.ssh/id_rsa hadoop3.cluster.com

On all Hadoop hosts, install and configure NTP:

$ yum install ntp -y
$ chkconfig ntpd on
$ ntpdate -u se.pool.ntp.org
$ service ntpd start

Deploying Hadoop

1. Install Ambari Server on one of the Hadoop nodes (we chose hadoop1.cluster.com); this will help us deploy the Hadoop cluster. Configure the Ambari repository for CentOS 6 and start the installation:

$ cd /etc/yum.repos.d
$ wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.5.1/ambari.repo
$ yum -y install ambari-server

2. Set up and start ambari-server:

$ ambari-server setup # accept all default values on prompt
$ ambari-server start

Give Ambari a few minutes to bootstrap before accessing the web interface at port 8080.

3. Open a web browser and navigate to http://hadoop1.cluster.com:8080. Log in with username and password ‘admin’. This is the Ambari dashboard; it will guide us through the deployment. Assign a cluster name and click Next.

4. At the Select Stack step, choose HDP 2.1.


5. Specify all Hadoop hosts in the Target Hosts field. Upload the SSH key that we generated in the Prerequisites section during the passwordless SSH setup and click Register and Confirm.


6. This page will confirm that Ambari has located the correct hosts for your Hadoop cluster. Ambari will check those hosts to make sure they have the correct directories, packages, and processes to continue the install. Click Next to proceed.

7. If you have enough resources, just go ahead and install all services:


8. On the Assign Masters page, we let Ambari choose the configuration for us before clicking Next.


9. On the Assign Slaves and Clients page, we enable all clients and slaves on each of our Hadoop hosts.


10. Hive, Oozie and Nagios might require further input, such as a database password and administrator email. Specify the needed information accordingly and click Next.

11. You will be able to review your configuration selections before clicking Deploy to start the deployment.


When ‘Successfully installed and started the services’ appears, choose Next. On the summary page, choose Complete. Hadoop installation and deployment is now complete. Verify that all services are running correctly.


We can now proceed to import some data from our Galera cluster as described in the next section.

Importing MySQL Data using Sqoop to Hive

Before importing any MySQL data, we need to create a target table in Hive. This table will have a similar definition to the source table in MySQL, since we are importing all columns at once. Here is the MySQL CREATE TABLE statement:

CREATE TABLE `wp_users` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `user_login` varchar(60) NOT NULL DEFAULT '',
  `user_pass` varchar(64) NOT NULL DEFAULT '',
  `user_nicename` varchar(50) NOT NULL DEFAULT '',
  `user_email` varchar(100) NOT NULL DEFAULT '',
  `user_url` varchar(100) NOT NULL DEFAULT '',
  `user_registered` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `user_activation_key` varchar(60) NOT NULL DEFAULT '',
  `user_status` int(11) NOT NULL DEFAULT '0',
  `display_name` varchar(250) NOT NULL DEFAULT '',
  PRIMARY KEY (`ID`),
  KEY `user_login_key` (`user_login`),
  KEY `user_nicename` (`user_nicename`)
) ENGINE=InnoDB AUTO_INCREMENT=5864 DEFAULT CHARSET=utf8

SSH into any Hadoop node (we installed the Hadoop clients on all nodes) and switch to the hdfs user:

$ su - hdfs

Enter the Hive console:

$ hive

Create a Hive database and table similar to our MySQL table (Hive does not support the DATETIME data type, so we replace it with TIMESTAMP):

hive> CREATE SCHEMA wordpress;
hive> SHOW DATABASES;
OK
default
wordpress
hive> USE wordpress;
hive> CREATE EXTERNAL TABLE IF NOT EXISTS users (
    ID BIGINT,
    user_login VARCHAR(60),
    user_pass VARCHAR(64),
    user_nicename VARCHAR(50),
    user_email VARCHAR(100),
    user_url VARCHAR(100),
    user_registered TIMESTAMP,
    user_activation_key VARCHAR(60),
    user_status INT,
    display_name VARCHAR(250));
hive> exit;

Now we can start to import the wp_users MySQL table into Hive’s users table, connecting to MySQL nodes through HAproxy (port 33306):

$ sqoop import \
--connect jdbc:mysql://192.168.0.100:33306/wordpress \
--username=wordpress \
--password=password \
--table=wp_users \
--hive-import \
--hive-table=wordpress.users \
--target-dir=wp_users_import \
--direct

We can track the import progress from the Sqoop output:

..
INFO mapreduce.ImportJobBase: Beginning import of wp_users
..
INFO mapreduce.Job: Job job_1400142750135_0020 completed successfully
..
OK
Time taken: 10.035 seconds
Loading data to table wordpress.users
Table wordpress.users stats: [numFiles=5, numRows=0, totalSize=240814, rawDataSize=0]
OK
Time taken: 3.666 seconds

You should see that an HDFS directory wp_users_import has been created (as specified in --target-dir in the Sqoop command) and we can browse its files using the following commands:

$ hdfs dfs -ls
$ hdfs dfs -ls wp_users_import
$ hdfs dfs -cat wp_users_import/part-m-00000 | more
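As a quick sanity check, you can compare the source row count with the number of lines that landed in HDFS. This is a rough sketch, assuming our HAProxy endpoint on port 33306 and Sqoop's default newline-delimited text output (it would miscount if column values contained embedded newlines):

```
# Rows in the source table, queried through HAProxy
$ mysql -h192.168.0.100 -P33306 -uwordpress -ppassword \
    -N -e 'SELECT COUNT(*) FROM wordpress.wp_users'

# Lines across all part files that Sqoop produced
$ hdfs dfs -cat wp_users_import/part-m-* | wc -l
```

The two numbers should match if the import completed cleanly.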

Now let’s check our imported data inside Hive:

$ hive -e 'SELECT * FROM wordpress.users LIMIT 10'
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
OK
2	admin	$P$BzaV8cFzeGpBODLqCmWp3uOtc5dVRb.	admin	my@email.com		2014-05-15 12:53:12		0	admin
5	SteveJones	$P$BciftXXIPbAhaWuO4bFb4LVUN24qay0	SteveJones	demouser2@54.254.93.50		2014-05-15 12:57:59		0	Steve
8	JanetGarrett	$P$BEp8IY1zvvrIdtPzDiU9D/br.FtzFa1	JanetGarrett	demouser3@54.254.93.50		2014-05-15 12:57:59		0	Janet
11	AnnWalker	$P$B1wix5Xn/15o06BWyHa.r/cZ0rwUWQ/	AnnWalker	demouser4@54.254.93.50		2014-05-15 12:57:59		0	Ann
14	DeborahFields	$P$B5PouJkJdfAucdz9p8NaKtS9WoKJu01	DeborahFields	demouser5@54.254.93.50		2014-05-15 12:57:59		0	Deborah
17	ChristopherMitchell	$P$Bi/VWI1W4iP7h9mC0SXd4f.kKWnilH/	ChristopherMitchell	demouser6@54.254.93.50		2014-05-15 12:57:59		0	Christopher
20	HenryHolmes	$P$BrPHv/ZHb7IBYzFpKgauBl/2WPZAC81	HenryHolmes	demouser7@54.254.93.50		2014-05-15 12:58:00		0	Henry
23	DavidWard	$P$BVYg0SFTihdXwDhushveet4n2Eitxp1	DavidWard	demouser8@54.254.93.50		2014-05-15 12:58:00		0	David
26	WilliamMurray	$P$Bc8FmkMadsQZCsW4L5Vo8Xax2ex8we.	WilliamMurray	demouser9@54.254.93.50		2014-05-15 12:58:00		0	William
29	KellyHarris	$P$Bc85yvlxvWQ4XxkeAgJRugOqm6S6au.	KellyHarris	demouser10@54.254.93.50		2014-05-15 12:58:00		0	Kelly
Time taken: 16.282 seconds, Fetched: 10 row(s)

Nice! Now we can see that our data exists both in Galera and Hadoop. You can also use the --query option in Sqoop to filter the data you want to export to Hadoop with an SQL query. This is a basic example of how we can start to leverage Hadoop for archival and analytics. Welcome to big data!
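As a sketch of the --query approach, an archival import of pre-2014 users might look like the following. The date filter is hypothetical; note that when --query is used, Sqoop requires the literal $CONDITIONS token in the WHERE clause and a --split-by column, and --table must be omitted:

```
$ sqoop import \
--connect jdbc:mysql://192.168.0.100:33306/wordpress \
--username=wordpress \
--password=password \
--query 'SELECT * FROM wp_users WHERE user_registered < "2014-01-01" AND $CONDITIONS' \
--split-by ID \
--target-dir=wp_users_2013_import
```

The query is wrapped in single quotes so the shell does not expand $CONDITIONS; Sqoop substitutes it with per-mapper range predicates at runtime.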

References

  • Sqoop User Guide (v1.4.2) http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html
  • Hortonworks Data Platform Documentation http://docs.hortonworks.com/HDPDocuments/Ambari-1.5.1.0/bk_using_Ambari_book/content/index.html
