Courseware 08: Message-Passing Programming Techniques

Uploaded by: w****i | Document ID: 91941788 | Uploaded: 2019-07-04 | Format: PPT | Pages: 100 | Size: 1.86 MB

Parallel Programming
Instructor: Zhang Weizhe (张伟哲)
Computer Network and Information Security Technique Research Center,
School of Computer Science and Technology, Harbin Institute of Technology

Programming Using the Message-Passing Paradigm

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

A Parallel Machine Model
- The cluster: a node can communicate with other nodes by sending and receiving messages over an interconnection network.
- Contrast this with the sequential von Neumann computer.

Principles of Message-Passing Programming
- Each processor in a message-passing program runs a separate process (sub-program, task).
- The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own exclusive address space. All variables are private.
- Each data element must belong to one of the partitions of the space; hence, data must be explicitly partitioned and placed.
- Processes communicate via special subroutine calls.
- All interactions (read-only or read/write) require the cooperation of two processes: the process that has the data and the process that wants to access it.

Principles of Message-Passing Programming
- SPMD (Single Program, Multiple Data): the same program runs everywhere; each process only knows and operates on a small part of the data.
- MPMD (Multiple Program, Multiple Data): each process performs a different function (input, problem setup, solution, output, display).

Messages
- Messages are packets of data moving between processes.
- The message-passing system has to be told the following information: the sending process, the source location, the data type, the data length, the receiving process(es), the destination location, and the destination size.

Message Passing
- Message-passing programs are often written using the asynchronous or loosely synchronous paradigms.
- A synchronous communication does not complete until the message has been received.
- An asynchronous communication completes as soon as the message is on its way.

What is MPI?
- The development of MPI started in April 1992. MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users), quite independently of any specific implementation.
- Web sites: http://www.mpi-forum.org/ and http://www-unix.mcs.anl.gov/mpi/
- MPI defines a standard library for message passing that can be used to develop portable message-passing programs in either C or Fortran.
- A fixed set of processes is created at program initialization, one process per processor: mpirun -np 5 program
- Each process knows its own number (rank), and each process knows the total number of processes.
- Each process can communicate with the other processes, but a process cannot create new processes (in MPI-1).

MPI: the Message Passing Interface
- The minimal set of MPI routines: MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv.

Starting and Terminating the MPI Library
- MPI_Init is called prior to any calls to other MPI routines. Its purpose is to initialize the MPI environment. It also strips off any MPI-related command-line arguments.
- MPI_Finalize is called at the end of the computation, and it performs various clean-up tasks to terminate the MPI environment.
- The prototypes of these two functions are:
  int MPI_Init(int *argc, char ***argv)
  int MPI_Finalize()
- All MPI routines, data types, and constants are prefixed by "MPI_". The return code for successful completion is MPI_SUCCESS.

Communicators
- A communicator defines a communication domain: a set of processes that are allowed to communicate with each other.
- Information about communication domains is stored in variables of type MPI_Comm.
- Communicators are used as arguments to all message-transfer MPI routines.
- A process can belong to many different (possibly overlapping) communication domains.
- MPI defines a default communicator called MPI_COMM_WORLD, which includes all the processes.

Querying Information
- The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively.
- The calling sequences of these routines are as follows:
  int MPI_Comm_size(MPI_Comm comm, int *size)
  int MPI_Comm_rank(MPI_Comm comm, int *rank)
- The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.
