ACM Word Template for SIG Site

1st Author
1st author's affiliation
1st line of address
2nd line of address
Telephone number, incl. country code
1st author's E-mail address

2nd Author
2nd author's affiliation
1st line of address
2nd line of address
Telephone number, incl. country code
2nd E-mail

3rd Author
3rd author's affiliation
1st line of address
2nd line of address
Telephone number, incl. country code
3rd E-mail

ABSTRACT
As network speeds continue to grow, new challenges in network processing are emerging. In this paper we first study the progress of network processing from a hardware perspective and show that the I/O and memory systems have become the main bottlenecks to further performance improvement. Based on this analysis, we conclude that conventional solutions for reducing I/O and memory access latencies are insufficient to address the problem. Motivated by these studies, we propose an improved DCA scheme combined with an INIC, which contributes an optimized architecture, a new I/O data transfer scheme, and improved cache policies. Experimental results show that our solution reduces processing cycles by 52.3% on average for receiving and by 14.3% for transmitting. I/O and memory traffic are also significantly reduced. Moreover, we investigate the behavior of the I/O and cache systems during network processing and present some conclusions about the DCA method.

Keywords
Keywords are your own designated keywords.

1. INTRODUCTION
Recently, many researchers have found that the I/O system has become the bottleneck to network performance improvement in modern computer systems [1][2][3]. Designed to support compute-intensive applications, the conventional I/O system has obvious disadvantages for fast network processing, in which bulk data transfers are performed. The lack of locality support and the high latency are its two main problems, and they have been widely discussed before [2][4]. To overcome these limitations, an effective solution called Direct Cache Access (DCA) was proposed by Intel [1]. It delivers network packets from the Network Interface Card (NIC) into the cache instead of memory, reducing the data access latency. Although the solution is promising, DCA has been shown to be insufficient for reducing access latency and memory traffic due to several limitations [3][5]. Another effective solution is the Integrated Network Interface Card (INIC), which is used in many academic and industrial processor designs [6][7]. The INIC is introduced to reduce the heavy burden of I/O register accesses in network drivers and of interrupt handling. However, a recent report [8] shows that the benefit of the INIC is insignificant for state-of-the-art 10 GbE network systems.
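As a back-of-envelope illustration of why cache injection matters (the cycle counts and names below are assumed values for a generic processor, not measurements from this paper), consider the first CPU access to each cache line of a received packet: with conventional DMA the line resides in DRAM, while with DCA it has already been injected into the last-level cache.

#include <stdio.h>

/* Back-of-envelope first-touch latency model for received packet data.
 * The cycle counts are assumed, generic round numbers, not measurements
 * from this paper. */
#define LLC_HIT_CYCLES   40     /* assumed last-level cache hit latency */
#define DRAM_MISS_CYCLES 200    /* assumed DRAM access latency          */

/* Average cycles for the CPU's first access to a cache line of packet
 * data, given the fraction of such lines already resident in the LLC. */
static double first_touch_cycles(double llc_resident_fraction)
{
    return llc_resident_fraction * LLC_HIT_CYCLES +
           (1.0 - llc_resident_fraction) * DRAM_MISS_CYCLES;
}

int main(void)
{
    /* Conventional DMA: the NIC writes packets to memory, so the first
     * CPU access to the payload misses the cache.                      */
    double classic = first_touch_cycles(0.0);

    /* DCA-style cache injection: packet lines are pushed into the LLC,
     * so the first access hits, provided the lines are not evicted
     * before the CPU reads them.                                       */
    double dca = first_touch_cycles(1.0);

    printf("conventional DMA: %.0f cycles per line\n", classic);
    printf("DCA injection   : %.0f cycles per line\n", dca);
    printf("reduction       : %.1f%%\n", 100.0 * (classic - dca) / classic);
    return 0;
}

If injected lines are evicted from the cache before the processor touches them, the resident fraction falls below 1.0 and the benefit shrinks accordingly, which is one way the limitations cited above can manifest.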

In this paper, we focus on efficient I/O system design for network processing on general-purpose processors (GPPs). Based on an analysis of existing methods, we propose an improved DCA combined with INIC solution to reduce the I/O-related data transfer latency.

The key contributions of this paper are as follows:
• Review the progress of network processing from a hardware perspective and point out that the I/O and related last-level memory systems have become the obstacle to performance improvement.
• Propose an improved DCA combined with INIC solution for the I/O subsystem design to address the inefficiency of the conventional I/O system.
• Give a framework of the improved I/O system architecture and evaluate the proposed solution with micro-benchmarks.
• Investigate the I/O and cache behaviors during network processing based on the proposed I/O system.

The paper is organized as follows. In Section 2, we present the background and motivation. In Section 3, we describe the improved DCA combined with INIC solution and give a framework of the proposed I/O system implementation. In Section 4, we first describe the experimental environment and methodology, and then analyze the experimental results. In Section 5, we review related work. Finally, in Section 6, we discuss our solution in relation to existing technologies and draw some conclusions.

2. Background and Motivation
In this section, we first review the progress of network processing and the main bottlenecks to network performance improvement today. Then, from the perspective of computer architecture, we give a detailed analysis of the network system. The motivation of this paper is also presented.

2.1 Network Processing Review
Figure 1 illustrates the progress of network processing. Packets from the physical line are sampled by the Network Interface Card (NIC). The NIC performs address filtering and flow control operations, then sends the frames to the socket buffer and notifies the OS to invoke network stack processing via interrupts. When the OS receives an interrupt, the network stack accesses the data in the socket buffer and computes the checksum. Protocol-specific operations are then performed layer by layer.
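The following minimal sketch restates this receive path in software terms; the structure, helper names, frame size, and toy checksum are illustrative assumptions rather than code from this paper or from any real driver or OS.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct sock_buf {                 /* hypothetical socket-buffer descriptor */
    uint8_t data[1518];           /* Ethernet frame bytes                  */
    size_t  len;
    bool    ready;                /* set by the "NIC", read by the "OS"    */
};

static const uint8_t host_mac[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};

/* Address filtering: keep only frames whose destination MAC is ours. */
static bool nic_address_filter(const uint8_t *frame, size_t len)
{
    return len >= 14 && memcmp(frame, host_mac, 6) == 0;
}

/* NIC side: sample a frame from the wire, filter it, copy it into the
 * socket buffer, and notify the OS (the flag stands in for the DMA
 * transfer plus the interrupt). */
static void nic_receive_frame(const uint8_t *frame, size_t len, struct sock_buf *skb)
{
    if (!nic_address_filter(frame, len) || len > sizeof skb->data)
        return;                               /* not ours or oversized: drop */
    memcpy(skb->data, frame, len);
    skb->len = len;
    skb->ready = true;
}

/* OS side: on the interrupt, the network stack reads the socket buffer,
 * verifies a (toy) checksum, and hands the packet up the protocol layers. */
static void os_network_stack_rx(struct sock_buf *skb)
{
    if (!skb->ready)
        return;
    uint8_t sum = 0;
    for (size_t i = 0; i < skb->len; i++)
        sum ^= skb->data[i];                  /* toy stand-in for the real checksum */
    if (sum != 0)
        return;                               /* corrupted frame: drop */
    printf("deliver %zu-byte frame to upper layers\n", skb->len);
}

int main(void)
{
    uint8_t frame[64] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}; /* dst MAC = host MAC */
    frame[63] = 0x02 ^ 0x01;      /* toy checksum byte so the frame verifies */
    struct sock_buf skb = {0};
    nic_receive_frame(frame, sizeof frame, &skb);
    os_network_stack_rx(&skb);
    return 0;
}

Each access to skb->data in this sketch corresponds to a memory access on the real receive path; DCA and the INIC aim to shorten exactly these data accesses and the register and interrupt traffic around them.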
