Tools Used by Large Websites: Lecture Notes

Format: PPT · 33 pages · 1.60 MB

Tools used by large websites

- Perlbal: load balancing across multiple web servers
- MogileFS: a distributed file system; some companies consider MogileFS better suited than Hadoop for handling small files
- memcached (http://memcached.org/): shared memory? Keeps the database results and other frequently read data in an in-memory cache
- Moxi: a proxy for memcached
- More Resource:

How did web services scale up in the past? (source images omitted)

HBase Intro
王耀聰 jazz@nchc.org.tw · 陳威宇 waue@nchc.org.tw (training course material)

HBase is a distributed, column-oriented database built on top of HDFS.

HBase is…
- A distributed data store that can scale horizontally to thousands of commodity servers and petabytes of indexed storage.
- Designed to operate on top of the Hadoop Distributed File System (HDFS) or the Kosmos File System (KFS, aka Cloudstore) for scalability, fault tolerance, and high availability.
- Integrated into the Hadoop map-reduce platform and paradigm.

Benefits
- Distributed storage
- Table-like data structure: a multi-dimensional map
- High scalability
- High availability
- High performance

Who uses HBase
- Adobe: internal use (structured data)
- Kalooga: image search engine
- Meetup: community events site
- Streamy: migrated successfully from MySQL to HBase
- Trend Micro: cloud-based virus-scanning architecture
- Yahoo!: stores document fingerprints to avoid duplicates
- More: http://wiki.apache.org/hadoop/Hbase/PoweredBy

Backdrop
- Started toward the end of 2006 by Chad Walters and Jim Kellerman
- 2006.11: Google releases its paper on Bigtable
- 2007.2: Initial HBase prototype created as a Hadoop contrib module
- 2007.10: First usable HBase
- 2008.1: Hadoop becomes an Apache top-level project and HBase becomes a subproject
- 2008.10: HBase 0.18 and 0.19 released

HBase is not…
- Tables have one primary index: the row key. There are no join operators.
- Scans and queries can select a subset of the available columns, perhaps by using a wildcard.
- There are three types of lookups:
  - Fast lookup using the row key and an optional timestamp
  - Full table scan
  - Range scan from region start to end

HBase is not… (2)
- Limited atomicity and transaction support: HBase supports batched mutations of single rows only.
- Data is unstructured and untyped.
- Not accessed or manipulated via SQL: programmatic access is via the Java, REST, or Thrift APIs; scripting via JRuby.

Why Bigtable?
- RDBMS performance is good for transaction processing, but for very-large-scale analytic processing the solutions are commercial, expensive, and specialized.
- Very-large-scale analytic processing means big queries, typically range or table scans, over big databases (100s of TB).

Why Bigtable? (2)
- Map-reduce on Bigtable, optionally with Cascading on top to support some relational algebra, may be a cost-effective solution.
- Sharding is not a solution for scaling open-source RDBMS platforms: it is application-specific, and (re)partitioning is labor-intensive.

Why HBase?
- HBase is a Bigtable clone.
- It is open source, with a good community and promise for the future.
- It is developed on top of, and integrates well with, the Hadoop platform, if you are using Hadoop already.
- It has a Cascading connector.

HBase benefits over an RDBMS
- No real indexes
- Automatic partitioning
- Scales linearly and automatically with new nodes
- Commodity hardware
- Fault tolerance
- Batch processing

Data model
- Tables are sorted by row.
- A table schema only defines its column families:
  - Each family consists of any number of columns
  - Each column consists of any number of versions
  - Columns only exist when inserted; NULLs are free
  - Columns within a family are sorted and stored together
- Everything except table names is bytes.
- (Row, Family:Column, Timestamp) → Value
- (cell diagram: row key, column family, value, timestamp)

Members
- Master
  - Responsible for monitoring region servers
  - Load balancing for regions
  - Redirects clients to the correct region servers
  - Currently the SPOF
- RegionServer (slaves)
  - Serves client requests (write/read/scan)
  - Sends heartbeats to the Master
  - Throughput and the number of regions scale with the number of region servers

Regions
- A table consists of one or more regions.
- A region is specified by its startKey and endKey.
- Each region may live on a different node, and each is made up of several HDFS files and blocks, which Hadoop replicates.

Case study: a blog

Logical data model
- A blog entry consists of title, date, author, type, and text fields.
- A user consists of username, password, and other fields.
- Each blog entry can have many comments.
- A comment consists of title, author, and text.
- (ERD figure omitted)

Blog: HBase table schema
- The row key combines the type (a two-character abbreviation) with a timestamp, so rows sort first by type and then by timestamp, which makes the table convenient to access with scan().
- The one-to-many relationship between BLOGENTRY and COMMENT is represented by a dynamic number of columns in the comment_title, comment_author, and comment_text column families.
- Each column is named by its comment's timestamp, so the columns within each family sort automatically by time.

Architecture: ZooKeeper
- HBase depends on ZooKeeper (Chapter 13) and by default manages a ZooKeeper instance as the authority on cluster state.

Operation
- The -ROOT- table holds the list of .META. table regions.
- The .META. table holds the list of all user-space regions.

Installation (1)
$ wget
$ sudo tar -zxvf hbase-*.tar.gz -C /opt/
$ sudo ln -sf /opt/hbase-0.20.3 /opt/hbase
$ sudo chown -R $USER:$USER /opt/hbase
$ sudo mkdir /var/hadoop/
$ sudo chmod 777 /var/hadoop
Start Hadoop.

Setup (1)
$ vim /opt/hbase/conf/hbase-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_CONF_DIR=/opt/hadoop/conf
export HBASE_HOME=/opt/hbase
export HBASE_LOG_DIR=/var/hadoop/hbase-logs
export HBASE_PID_DIR=/var/hadoop/hbase-pids
export HBASE_MANAGES_ZK=true
export HBASE_CLASSPATH=$HBASE_CLA
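The memcached idea mentioned above (keep frequently read database results in an in-memory cache in front of the store) is the classic cache-aside pattern. A minimal sketch, where a plain dict stands in for memcached and `slow_db_query` is a hypothetical stand-in for a real database call:

```python
import time

# Stand-in for memcached: a simple in-process key/value store.
cache = {}

def slow_db_query(key):
    """Hypothetical database lookup; simulates query latency."""
    time.sleep(0.01)
    return f"row-for-{key}"

def get(key):
    """Cache-aside read: check the cache first, fall back to the database."""
    if key in cache:
        return cache[key]          # cache hit: no database round-trip
    value = slow_db_query(key)     # cache miss: query the database...
    cache[key] = value             # ...and populate the cache for next time
    return value

print(get("user:42"))  # first call misses and queries the "database"
print(get("user:42"))  # second call is served from the cache
```

A real memcached deployment adds expiry and eviction, which this sketch omits.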
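The three lookup types listed above can be sketched against a plain sorted Python dict standing in for an HBase table (illustrative only; real access goes through the Java, REST, or Thrift APIs):

```python
from bisect import bisect_left

# Rows kept sorted by row key, as HBase stores them.
table = {
    "blog#001": "first post",
    "blog#002": "second post",
    "user#alice": "alice's profile",
    "user#bob": "bob's profile",
}
keys = sorted(table)

# 1. Fast lookup by row key.
assert table["blog#002"] == "second post"

# 2. Full table scan: visit every row in key order.
all_rows = [(k, table[k]) for k in keys]

# 3. Range scan from a start key to an end key (end exclusive).
def range_scan(start, end):
    i = bisect_left(keys, start)
    out = []
    while i < len(keys) and keys[i] < end:
        out.append((keys[i], table[keys[i]]))
        i += 1
    return out

print(range_scan("blog#", "blog#~"))  # returns only the blog rows
```

Because there is no secondary index, any query that cannot be phrased as one of these three patterns must fall back to a full scan.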
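The (row, family:column, timestamp) → value mapping described above can be pictured as a nested sorted map. A rough Python sketch of that structure (not an HBase API; `put`/`get` here are illustrative helpers):

```python
# table[row]["family:qualifier"][timestamp] = value
# Columns only exist once inserted, so absent cells ("NULLs") cost nothing.
table = {}

def put(row, column, value, ts):
    table.setdefault(row, {}).setdefault(column, {})[ts] = value

def get(row, column):
    """Return the newest version of a cell, mirroring HBase's default read."""
    versions = table[row][column]
    return versions[max(versions)]

put("row1", "info:title", "old title", ts=100)
put("row1", "info:title", "new title", ts=200)
put("row1", "info:author", "alice", ts=150)

print(get("row1", "info:title"))  # newest version wins: "new title"
```

Each cell keeps multiple timestamped versions, which is why a read must pick the newest one rather than find a single value.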
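Since each region covers a half-open [startKey, endKey) range as described above, finding the region that serves a row key is a sorted-range lookup. A sketch with made-up region boundaries and server names:

```python
from bisect import bisect_right

# Hypothetical regions, sorted by startKey; each covers [startKey, endKey).
regions = [
    ("",  "g", "regionserver-1"),   # startKey, endKey, hosting server
    ("g", "p", "regionserver-2"),
    ("p", "",  "regionserver-3"),   # empty endKey = end of table
]
starts = [r[0] for r in regions]

def find_region(row_key):
    """Return the server for the region whose range contains row_key."""
    i = bisect_right(starts, row_key) - 1
    start, end, server = regions[i]
    assert row_key >= start and (end == "" or row_key < end)
    return server

print(find_region("apple"))   # -> regionserver-1
print(find_region("zebra"))   # -> regionserver-3
```

In the real system this routing table is what the .META. catalog table holds, and clients cache it rather than recomputing it per request.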
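The row-key design above (a two-character type abbreviation plus a timestamp) can be demonstrated directly. The abbreviations "be" and "co" below are assumptions for illustration, not taken from the slides:

```python
def make_row_key(type_abbrev, timestamp):
    """Compose a row key so rows sort by type, then by time within a type."""
    return f"{type_abbrev}{timestamp:010d}"  # zero-pad so keys sort correctly

keys = [
    make_row_key("be", 1200000000),   # "be" = blog entry (assumed abbreviation)
    make_row_key("co", 1200000100),   # "co" = comment (assumed abbreviation)
    make_row_key("be", 1200000050),
]
keys.sort()
print(keys)
# All "be" rows come before all "co" rows, and each type is in time order,
# so one range scan over the "be" prefix retrieves every blog entry.
```

Zero-padding the timestamp matters: keys are compared as bytes, so unpadded numbers of different lengths would sort lexically rather than chronologically.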
