Google Cloud Computing Course Series, Lecture 1: Introduction (Slides)


Distributed Computing Seminar
Lecture 1: Introduction to Distributed Computing & Systems Background
Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet
Summer 2007
Except where otherwise noted, the contents of this presentation are Copyright 2007 University of Washington and are licensed under the Creative Commons Attribution 2.5 License.

Course Overview
- 5 lectures
  - 1 Introduction
  - 2 Technical Side: MapReduce & GFS
  - 2 Theoretical: Algorithms for distributed computing
- Readings + Questions nightly

Outline
- Introduction to Distributed Computing
- Parallel vs. Distributed Computing
- History of Distributed Computing
- Parallelization and Synchronization
- Networking Basics

Computer Speedup
- Moore's Law: "The density of transistors on a chip doubles every 18 months, for the same cost" (1965)
- Image: Tom's Hardware, not subject to the Creative Commons license applicable to the rest of this work.

Scope of problems
- What can you do with 1 computer?
- What can you do with 100 computers?
- What can you do with an entire data center?

Distributed problems
- Rendering multiple frames of high-quality animation
- Image: DreamWorks Animation, not subject to the Creative Commons license applicable to the rest of this work.

Distributed problems
- Simulating several hundred or thousand characters
- Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema; neither image is subject to the Creative Commons license applicable to the rest of the work.

Distributed problems
- Indexing the web (Google)
- Simulating an Internet-sized network for networking experiments (PlanetLab)
- Speeding up content delivery (Akamai)
- What is the key attribute that all these examples have in common?

Parallel vs. Distributed
- Parallel computing can mean:
  - Vector processing of data
  - Multiple CPUs in a single computer
- Distributed computing is multiple CPUs across many computers over the network

A Brief History: 1975-85
- Parallel computing was favored in the early years
- Primarily vector-based at first
- Gradually more thread-based parallelism was introduced
- Image: Computer Pictures Database and Cray Research Corp, not subject to the Creative Commons license applicable to the rest of this work.

A Brief History: 1985-95
- "Massively parallel architectures" start rising in prominence
- Message Passing Interface (MPI) and other libraries developed
- Bandwidth was a big problem

A Brief History: 1995-Today
- Cluster/grid architecture increasingly dominant
- Special node machines eschewed in favor of COTS technologies
- Web-wide cluster software
- Companies like Google take this to the extreme

Parallelization & Synchronization

Parallelization Idea
- Parallelization is "easy" if processing can be cleanly split into n units.

Parallelization Idea (2)
- In a parallel computation, we would like to have as many threads as we have processors; e.g., a four-processor computer would be able to run four threads at the same time.
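As a concrete illustration of the thread-per-processor idea above, here is a minimal Java sketch (added to this transcript, not from the original slides; the class name, array contents, and chunk-splitting scheme are arbitrary choices): the work of summing an array is split into one chunk per available processor, and each chunk runs on its own thread.

    import java.util.concurrent.atomic.AtomicLong;

    public class ParallelSum {
        public static void main(String[] args) throws InterruptedException {
            // One work unit per processor, as suggested on the slide.
            int nThreads = Runtime.getRuntime().availableProcessors();
            long[] data = new long[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            AtomicLong total = new AtomicLong();           // safe shared accumulator
            Thread[] workers = new Thread[nThreads];
            int chunk = (data.length + nThreads - 1) / nThreads;

            for (int t = 0; t < nThreads; t++) {
                final int start = t * chunk;
                final int end = Math.min(start + chunk, data.length);
                workers[t] = new Thread(() -> {
                    long partial = 0;
                    for (int i = start; i < end; i++) partial += data[i];
                    total.addAndGet(partial);              // aggregate once per thread
                });
                workers[t].start();
            }
            for (Thread w : workers) w.join();             // wait for all workers
            System.out.println("sum = " + total.get());
        }
    }

This only works so cleanly because the chunks are completely independent; the pitfalls on the next slides are about what happens when they are not.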

Parallelization Idea (3)

Parallelization Idea (4)

Parallelization Pitfalls
- But this model is too simple!
  - How do we assign work units to worker threads?
  - What if we have more work units than threads?
  - How do we aggregate the results at the end?
  - How do we know all the workers have finished?
  - What if the work cannot be divided into completely separate tasks?
- What is the common theme of all of these problems?

Parallelization Pitfalls (2)
- Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource.
- Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!
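The work-assignment and aggregation questions above are commonly answered with a work queue. The following Java sketch (an illustration added here, not part of the original deck; the names and numbers are arbitrary) submits more work units than there are threads to a fixed-size pool and collects the results through futures.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class WorkQueueDemo {
        public static void main(String[] args) throws Exception {
            // A fixed pool of worker threads; the executor's internal queue
            // holds the work units that are not yet being processed.
            ExecutorService pool = Executors.newFixedThreadPool(4);

            List<Future<Integer>> results = new ArrayList<>();
            for (int unit = 0; unit < 20; unit++) {        // more work units than threads
                final int n = unit;
                results.add(pool.submit(() -> n * n));     // each task returns its result
            }

            int total = 0;
            for (Future<Integer> f : results) {
                total += f.get();                          // get() blocks until that task is done
            }
            pool.shutdown();                               // no more work; let workers exit
            System.out.println("aggregated total = " + total);
        }
    }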

What is Wrong With This?
Thread 1:
    void foo() {
        x++;
        y = x;
    }
Thread 2:
    void bar() {
        y++;
        x += 3;
    }
If the initial state is y = 0, x = 6, what happens after these threads finish running?

Multithreaded = Unpredictability
- When we run a multithreaded program, we don't know what order threads run in, nor do we know when they will interrupt one another.
- Many things that look like "one step" operations actually take several steps under the hood:
Thread 1:
    void foo() {
        eax = mem[x];
        inc eax;
        mem[x] = eax;
        ebx = mem[x];
        mem[y] = ebx;
    }
Thread 2:
    void bar() {
        eax = mem[y];
        inc eax;
        mem[y] = eax;
        eax = mem[x];
        add eax, 3;
        mem[x] = eax;
    }
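To see the multi-step nature of ++ in practice, here is a small runnable Java sketch (added for illustration, not from the slides; the class name and loop counts are arbitrary): two threads each increment an unsynchronized counter one million times, and the final value usually comes out well below 2,000,000 because interleaved read-modify-write sequences lose updates.

    public class RaceDemo {
        static int counter = 0;                        // shared, unsynchronized state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++;                         // read-modify-write: several steps, not one
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Typically prints something well below 2000000, and a different value each run.
            System.out.println("counter = " + counter);
        }
    }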

Multithreaded = Unpredictability
- This applies to more than just integers:
  - Pulling work units from a queue
  - Reporting work back to the master unit
  - Telling another thread that it can begin the "next phase" of processing
- All require synchronization!

Synchronization Primitives
- A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically.
- Hardware support guarantees that operations on synchronization primitives only ever take one step.
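As one concrete example of hardware-backed atomicity (an addition to this transcript, not part of the original deck), Java exposes atomic variables such as java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() is performed as a single indivisible update on common platforms. Rewriting the earlier race demo with it gives the expected count.

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicDemo {
        static AtomicInteger counter = new AtomicInteger(0);   // atomically updated shared state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter.incrementAndGet();         // one indivisible step, no lost updates
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("counter = " + counter.get());  // reliably 2000000
        }
    }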

Semaphores
- A semaphore is a flag that can be raised or lowered in one step.
- Semaphores were flags that railroad engineers would use when entering a shared track.
- Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores
- set() and reset() can be thought of as lock() and unlock().
- Calls to lock() when the semaphore is already locked cause the thread to block.
- Pitfalls: Must "bind" semaphores to particular objects; must remember to unlock correctly.

The "corrected" example
Thread 1:
    void foo() {
        sem.lock();
        x++;
        y = x;
        sem.unlock();
    }
Thread 2:
    void bar() {
        sem.lock();
        y++;
        x += 3;
        sem.unlock();
    }
Global var "Semaphore sem = new Semaphore();" guards access to x & y.
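A runnable Java counterpart of the "corrected" example (a sketch added here; the slide's sem.lock()/sem.unlock() pseudocode maps onto acquire()/release() of java.util.concurrent.Semaphore with a single permit, and the class name is an arbitrary choice):

    import java.util.concurrent.Semaphore;

    public class CorrectedExample {
        static int x = 6, y = 0;
        static final Semaphore sem = new Semaphore(1);   // one permit = binary lock guarding x & y

        static void foo() throws InterruptedException {
            sem.acquire();          // lock()
            x++;
            y = x;
            sem.release();          // unlock()
        }

        static void bar() throws InterruptedException {
            sem.acquire();
            y++;
            x += 3;
            sem.release();
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { try { foo(); } catch (InterruptedException e) { } });
            Thread t2 = new Thread(() -> { try { bar(); } catch (InterruptedException e) { } });
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("x = " + x + ", y = " + y);
        }
    }

Each critical section now runs atomically with respect to the other, although which thread enters first is still up to the scheduler, so two different final states remain possible.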

Condition Variables
- A condition variable notifies threads that a particular condition has been met.
- Example: inform another thread that a queue now contains elements to pull from (or that it is empty - request more elements!).
- Pitfall: What if nobody's listening?

The final example
Thread 1:
    void foo() {
        sem.lock();
        x++;
        y = x;
        fooDone = true;
        sem.unlock();
        fooFinishedCV.notify();
    }
Thread 2:
    void bar() {
        sem.lock();
        if (!fooDone) fooFinishedCV.wait(sem);
        y++;
        x += 3;
        sem.unlock();
    }
Global vars:
    Semaphore sem = new Semaphore();
    ConditionVar fooFinishedCV = new ConditionVar();
    boolean fooDone = false;
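In standard Java the same pattern can be written with java.util.concurrent.locks.ReentrantLock and its Condition (a sketch added for illustration; the slide's Semaphore/ConditionVar classes are pseudocode, and the class name here is arbitrary). Note that await() atomically releases the lock while waiting, signal() must be called while holding the lock, and the condition is rechecked in a loop to guard against spurious wakeups.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class FinalExample {
        static int x = 6, y = 0;
        static boolean fooDone = false;
        static final ReentrantLock lock = new ReentrantLock();
        static final Condition fooFinished = lock.newCondition();

        static void foo() {
            lock.lock();
            try {
                x++;
                y = x;
                fooDone = true;
                fooFinished.signal();        // wake up bar() if it is waiting
            } finally {
                lock.unlock();
            }
        }

        static void bar() throws InterruptedException {
            lock.lock();
            try {
                while (!fooDone) {           // loop guards against spurious wakeups
                    fooFinished.await();     // releases the lock while waiting
                }
                y++;
                x += 3;
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t2 = new Thread(() -> { try { bar(); } catch (InterruptedException e) { } });
            Thread t1 = new Thread(FinalExample::foo);
            t2.start(); t1.start();
            t1.join(); t2.join();
            System.out.println("x = " + x + ", y = " + y);   // deterministic: x = 10, y = 8
        }
    }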

Too Much Synchronization? Deadlock
- Synchronization becomes even more complicated when multiple locks can be used.
- Can cause the entire system to "get stuck".
Thread A:
    semaphore1.lock();
    semaphore2.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();
Thread B:
    semaphore2.lock();
    semaphore1.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();
(Image: RPI CSCI.4210 Operating Systems notes)
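For completeness, here is a self-contained Java sketch (not from the deck; the class name, lock names, and sleep durations are arbitrary) of the lock-ordering deadlock shown above: the two threads take the same pair of locks in opposite orders, and with the sleeps forcing the bad interleaving the program usually hangs. Acquiring the locks in one agreed-upon order in both threads removes the deadlock.

    import java.util.concurrent.locks.ReentrantLock;

    public class DeadlockDemo {
        static final ReentrantLock lock1 = new ReentrantLock();
        static final ReentrantLock lock2 = new ReentrantLock();

        static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
        }

        public static void main(String[] args) {
            Thread a = new Thread(() -> {
                lock1.lock();                 // A holds lock1 ...
                sleep(100);
                lock2.lock();                 // ... and waits for lock2 (held by B)
                /* use data guarded by both locks */
                lock2.unlock();
                lock1.unlock();
            });
            Thread b = new Thread(() -> {
                lock2.lock();                 // B holds lock2 ...
                sleep(100);
                lock1.lock();                 // ... and waits for lock1 (held by A): deadlock
                /* use data guarded by both locks */
                lock1.unlock();
                lock2.unlock();
            });
            a.start();
            b.start();
            // Both threads block forever; the JVM does not exit.
        }
    }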

The Moral: Be Careful!
- Synchronization is hard
  - Need to consider all possible shared state
  - Must keep locks organized and use them consistently and correctly
- Knowing there are bugs may be tricky; fixing them can be even worse!
- Keeping shared state to a minimum reduces total system complexity

Fundamentals of Networking

Sockets: The Internet = tubes?
- A socket is the basic network interface.
- Provides a two-way "pipe" abstraction between two applications.
- The client creates a socket and connects to the server, which receives a socket representing the other side.

Ports
- Within an IP address, a port is a sub-address identifying a listening program.
- Allows multiple clients to connect to a server at once.
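A minimal Java sketch of the socket-and-port model described above (an illustration added here, not from the slides; the class name and port 9000 are arbitrary choices): the server listens on a port, the client connects to it, and each end gets a socket it can read from and write to.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class EchoPair {
        public static void main(String[] args) throws Exception {
            int port = 9000;                              // sub-address the server listens on

            Thread server = new Thread(() -> {
                try (ServerSocket listener = new ServerSocket(port);
                     Socket conn = listener.accept();     // socket representing this client
                     BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                     PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine());  // two-way pipe: read, then write back
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();
            Thread.sleep(200);                            // crude wait for the server to start listening

            try (Socket sock = new Socket("localhost", port);   // client side of the pipe
                 PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(sock.getInputStream()))) {
                out.println("hello");
                System.out.println(in.readLine());        // prints "echo: hello"
            }
            server.join();
        }
    }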

What makes this work?
- Underneath the socket layer are several more protocols.
- Most important are TCP and IP (which are used hand-in-hand so often, they're often spoken of as one protocol: TCP/IP).
- Even more low-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless.

Why is This Necessary?
- Not actually tube-like "underneath the hood"
- Unlike the phone system (circuit switched), the packet switched Internet uses many routes at once.

Networking Issues
- If a party to a socket disconnects, how much data did they receive? Did they crash? Or did a machine in the middle?
- Can someone in the middle intercept/modify our data?
- Traffic congestion makes switch/router topology important for efficient throughput.

Conclusions
- Processing more data means using more machines at the same time.
- Cooperation between processes requires synchronization.
- Designing real distributed systems requires consideration of networking topology.
- Next time: How MapReduce works
