Computer Organization and Design, 5th Edition: Chapter 5 Solutions

5.1
5.1.1 4
5.1.2 I, J
5.1.3 A[I][J]
5.1.4 3596 (8 × 800/4 + 288/4 + 8000/4)
5.1.5 I, J
5.1.6 A(J, I)
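The 5.1 answers turn on array layout: C stores a two-dimensional array row by row, while MATLAB stores it column by column, so the subscript that varies fastest in the inner loop determines which reference enjoys spatial locality. The exercise's loop nest is not reproduced in this extract, so the sketch below only illustrates the address arithmetic; the element size and array dimensions are illustrative assumptions, not values from the solution.

# Byte offset of element (i, j) in an R-by-C array of 4-byte words under
# the two storage conventions contrasted in exercise 5.1.
WORD = 4          # assumed element size in bytes
R, C = 8, 8000    # assumed array dimensions, for illustration only

def row_major(i, j):       # C layout: elements of one row are contiguous
    return (i * C + j) * WORD

def column_major(i, j):    # MATLAB layout: elements of one column are contiguous
    return (j * R + i) * WORD

# Stepping j with i fixed touches adjacent bytes in row-major order (spatial
# locality for A[I][J] in C) but strides by R * WORD bytes in column-major
# order (which is why the MATLAB answer names A(J, I) instead).
print(row_major(0, 0), row_major(0, 1))        # 0 4
print(column_major(0, 0), column_major(0, 1))  # 0 32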

5.2
5.2.1

Word Address   Binary Address   Tag   Index   Hit/Miss
3              0000 0011        0     3       M
180            1011 0100        11    4       M
43             0010 1011        2     11      M
2              0000 0010        0     2       M
191            1011 1111        11    15      M
88             0101 1000        5     8       M
190            1011 1110        11    14      M
14             0000 1110        0     14      M
181            1011 0101        11    5       M
44             0010 1100        2     12      M
186            1011 1010        11    10      M
253            1111 1101        15    13      M

5.2.2

Word Address   Binary Address   Tag   Index   Hit/Miss
3              0000 0011        0     1       M
180            1011 0100        11    2       M
43             0010 1011        2     5       M
2              0000 0010        0     1       H
191            1011 1111        11    7       M
88             0101 1000        5     4       M
190            1011 1110        11    7       H
14             0000 1110        0     7       M
181            1011 0101        11    2       H
44             0010 1100        2     6       M
186            1011 1010        11    5       M
253            1111 1101        15    6       M
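The tag, index, and hit/miss columns can be reproduced mechanically. Below is a minimal sketch of that bookkeeping for a direct-mapped cache addressed by word address; the geometries passed in (16 one-word blocks, then 8 two-word blocks) are the ones implied by the index and tag columns above, and the trace is the word-address sequence from the tables.

# Replay a word-address trace through a direct-mapped cache, reporting the
# tag, index, and hit/miss outcome of each reference.
def simulate(trace, num_blocks, words_per_block):
    lines = {}                                  # index -> tag currently resident
    for addr in trace:
        block = addr // words_per_block
        index = block % num_blocks
        tag = block // num_blocks
        hit = lines.get(index) == tag
        lines[index] = tag                      # direct-mapped: always overwrite
        yield addr, tag, index, "H" if hit else "M"

trace = [3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253]

for row in simulate(trace, num_blocks=16, words_per_block=1):   # 5.2.1 columns
    print(*row)
for row in simulate(trace, num_blocks=8, words_per_block=2):    # 5.2.2 columns
    print(*row)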

5.2.3

                                       Cache 1           Cache 2           Cache 3
Word Address   Binary Address   Tag    Index  Hit/Miss   Index  Hit/Miss   Index  Hit/Miss
3              0000 0011        0      3      M          1      M          0      M
180            1011 0100        22     4      M          2      M          1      M
43             0010 1011        5      3      M          1      M          0      M
2              0000 0010        0      2      M          1      M          0      M
191            1011 1111        23     7      M          3      M          1      M
88             0101 1000        11     0      M          0      M          0      M
190            1011 1110        23     6      M          3      H          1      H
14             0000 1110        1      6      M          3      M          1      M
181            1011 0101        22     5      M          2      H          1      M
44             0010 1100        5      4      M          2      M          1      M
186            1011 1010        23     2      M          1      M          0      M
253            1111 1101        31     5      M          2      M          1      M

Cache 1 miss rate = 100%
Cache 1 total cycles = 12 × 25 + 12 × 2 = 324
Cache 2 miss rate = 10/12 = 83%
Cache 2 total cycles = 10 × 25 + 12 × 3 = 286
Cache 3 miss rate = 11/12 = 92%
Cache 3 total cycles = 11 × 25 + 12 × 5 = 335
Cache 2 provides the best performance.
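The cycle totals follow the pattern total cycles = misses × miss penalty + references × hit time; the 25-cycle miss penalty and the 2-, 3-, and 5-cycle hit times are the values appearing in the expressions above. A short sketch of that arithmetic:

# Miss rate and total cycles for the three cache designs of 5.2.3.
def total_cycles(misses, references, miss_penalty, hit_time):
    return misses * miss_penalty + references * hit_time

references = 12          # twelve word addresses in the trace
miss_penalty = 25        # cycles, from the expressions above

for name, misses, hit_time in (("Cache 1", 12, 2),
                               ("Cache 2", 10, 3),
                               ("Cache 3", 11, 5)):
    print(name, "miss rate", f"{misses}/{references}",
          "total cycles", total_cycles(misses, references, miss_penalty, hit_time))
# Prints 324, 286, and 335 cycles; Cache 2 wins because its extra hit-time
# cost is more than repaid by the two references that now hit.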

5.2.4 First we must compute the number of cache blocks in the initial cache configuration. For this, we divide 32 KiB by 4 (for the number of bytes per word) and again by 2 (for the number of words per block). This gives us 4096 blocks and a resulting index field width of 12 bits. We also have a word offset size of 1 bit and a byte offset size of 2 bits. This gives us a tag field size of 32 − 15 = 17 bits. These tag bits, along with one valid bit per block, will require 18 × 4096 = 73728 bits, or 9216 bytes. The total cache size is thus 9216 + 32768 = 41984 bytes.

The total cache size can be generalized to

totalsize = datasize + (validbitsize + tagsize) × blocks
totalsize = 41984
datasize = blocks × blocksize × wordsize
wordsize = 4
tagsize = 32 − log2(blocks) − log2(blocksize) − log2(wordsize)
validbitsize = 1

Increasing from 2-word blocks to 16-word blocks will reduce the tag size from 17 bits to 14 bits. To determine the number of blocks, we solve the inequality

41984 ≤ 64 × blocks + 15 × blocks

Solving this inequality gives us 531 blocks, and rounding up to the next power of two gives a 1024-block cache. The larger block size may require an increased hit time and an increased miss penalty compared to the original cache, and the smaller number of blocks may cause a higher conflict miss rate than the original cache.
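The generalized formula is easy to evaluate directly. Here is a minimal sketch that reproduces the 41984-byte total for the original 4096-block, 2-word-block configuration, following the same bit/byte accounting as the text; the 16-word-block case is noted in a comment rather than recomputed, since the text fixes the tag at 14 bits before solving for the block count.

import math

# Total cache size in bytes: data storage plus (tag + valid) bits per block,
# converted to bytes, as in the 5.2.4 expressions.
def total_cache_bytes(blocks, blocksize_words, wordsize_bytes=4, addr_bits=32):
    data_bytes = blocks * blocksize_words * wordsize_bytes
    tag_bits = (addr_bits
                - int(math.log2(blocks))
                - int(math.log2(blocksize_words))
                - int(math.log2(wordsize_bytes)))
    overhead_bits = (tag_bits + 1) * blocks        # one valid bit per block
    return data_bytes + overhead_bits // 8

print(total_cache_bytes(4096, 2))   # 41984 bytes, matching the text

# For 16-word blocks the text takes a 14-bit tag, i.e. 15 tag/valid bits and
# 64 data bytes per block, and solves 41984 <= 64*blocks + 15*blocks to get
# roughly 531 blocks, rounded up to a 1024-block cache.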

5.2.5 Associative caches are designed to reduce the rate of conflict misses. As such, a sequence of read requests with the same 12-bit index field but a different tag field will generate many misses. For the cache described above, the sequence 0, 32768, 0, 32768, 0, 32768, ..., would miss on every access, while a 2-way set-associative cache with LRU replacement, even one with a significantly smaller overall capacity, would hit on every access after the first two.

5.2.6 Yes, it is possible to use this function to index the cache. However, information about the five bits is lost because the bits are XORed, so you must include more tag bits to identify the address in the cache.
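The 0/32768 example in 5.2.5 can be checked with a small simulation. The sketch below assumes the 5.2.4 geometry for the direct-mapped case (4096 sets of 8-byte blocks, so byte addresses 0 and 32768 map to the same index with different tags) and, for comparison, a 2-way LRU cache whose 1024 sets are chosen only to make it smaller overall; that set count is illustrative, not from the text.

from collections import OrderedDict

# Alternate between byte addresses 0 and 32768 in a direct-mapped cache and
# in a smaller 2-way LRU set-associative cache (8-byte blocks, as in 5.2.4).
def run(trace, sets, ways, block_bytes=8):
    cache = [OrderedDict() for _ in range(sets)]   # per-set tags in LRU order
    hits = 0
    for addr in trace:
        block = addr // block_bytes
        index, tag = block % sets, block // sets
        if tag in cache[index]:
            hits += 1
            cache[index].move_to_end(tag)          # mark as most recently used
        else:
            if len(cache[index]) == ways:
                cache[index].popitem(last=False)   # evict the LRU tag
            cache[index][tag] = True
    return hits

trace = [0, 32768] * 6
print(run(trace, sets=4096, ways=1))   # 0 hits: same index, tags keep colliding
print(run(trace, sets=1024, ways=2))   # 10 hits: both tags fit in one set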

5.3
5.3.1 8
5.3.2 32
5.3.3 1 + (22/8/32) = 1.086
5.3.4 3
5.3.5 0.25
5.3.6 <index, tag, data>, with the index and tag in binary:
<000001, 0001, mem[1024]>
<000001, 0011, mem[16]>
<001011, 0000, mem[176]>
<001000, 0010, mem[2176]>
<001110, 0000, mem[224]>
<001010, 0000, mem[160]>
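The 5.3.3 ratio compares total storage per block with data storage per block: with 8 words of 32 bits of data per block and a 22-bit tag, the overhead factor is 1 + 22/(8 × 32). A one-line check (the 22-bit tag width is read off the expression above, not restated from the exercise):

# Overhead ratio for 5.3.3: tag bits per block over data bits per block.
tag_bits = 22            # taken from the expression 1 + (22/8/32)
data_bits = 8 * 32       # 8 words of 32 bits per block
print(1 + tag_bits / data_bits)   # 1.0859375, quoted as 1.086 above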

5.4
5.4.1 The L1 cache has a low write miss penalty while the L2 cache has a high write miss penalty. A write buffer between the L1 and L2 cache would hide the write miss latency of the L2 cache. The L2 cache would benefit from write buffers when replacing a dirty block, since the new block would be read in before the dirty block is physically written to memory.

5.4.2 On an L1 write miss, the word is written directly to L2 without bringing its block into the L1 cache. If this results in an L2 miss, its block must be brought into the L2 cache, possibly replacing a dirty block which must first be written to memory.

5.4.3 After an L1 write miss, the block will reside in L2 but not in L1. A subsequent read miss on the same block will require that the block in L2 be written back to memory, transferred to L1, and invalidated in L2.
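5.4.2 describes a no-write-allocate L1 in front of a write-allocate, write-back L2. The toy model below is only a sketch of that ordering of steps (dirty victim written back, missing block fetched, then the write landing in L2); the single-set ToyL2 class, its capacity, and the 2-word block size are invented for illustration and are not part of the solution.

BLOCK_WORDS = 2            # assumed block size in words, for illustration only

class ToyL2:
    """Toy write-back, write-allocate L2 that receives words bypassing L1."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}                         # block number -> {"data", "dirty"}

    def write_word(self, addr, word, memory):
        blk = addr // BLOCK_WORDS
        if blk not in self.blocks:               # L2 miss behind the L1 write miss
            if len(self.blocks) == self.capacity:
                victim_no, victim = self.blocks.popitem()
                if victim["dirty"]:
                    memory[victim_no] = victim["data"]    # write dirty victim back first
            fetched = memory.get(blk, [0] * BLOCK_WORDS)  # then fetch the missing block
            self.blocks[blk] = {"data": list(fetched), "dirty": False}
        entry = self.blocks[blk]
        entry["data"][addr % BLOCK_WORDS] = word # the write lands in L2 only
        entry["dirty"] = True                    # block is now dirty in L2

memory = {}                                      # block number -> list of words
l2 = ToyL2(capacity_blocks=1)
l2.write_word(0, 42, memory)                     # L2 miss: allocate block 0
l2.write_word(2, 7, memory)                      # evicts dirty block 0 to memory first
print(memory)                                    # {0: [42, 0]}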

5.4.4 One in four instructions is a data read and one in ten instructions is a data write. For a CPI of 2, there are 0.5 instruction accesses per cycle, 12.5% of cycles will require a data read, and 5% of cycles will require a data write.

The instruction bandwidth is thus (0.0030 × 64) × 0.5 = 0.096 bytes/cycle. The data read bandwidth is thus 0.02 × (0.13 + 0.050) × 64 = 0.23 bytes/cycle. The total read bandwidth requirement is 0.33 bytes/cycle. The data write bandwidth requirement is 0.05 × 4 = 0.2 bytes/cycle.
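Each read-bandwidth figure is miss rate × fraction of cycles generating that access × block size, and the data write traffic is charged as one 4-byte word per data write, exactly as the expressions above do. A short sketch reproducing the numbers (0.13 is the rounded 12.5% read fraction used there):

# Bandwidth requirements (bytes per cycle), following the 5.4.4 expressions.
inst_miss_rate, data_miss_rate = 0.0030, 0.02   # miss rates from the expressions
block_bytes = 64
inst_per_cycle = 0.5                            # from the CPI of 2
read_frac, write_frac = 0.13, 0.050             # fractions of cycles reading/writing data

inst_bw = inst_miss_rate * block_bytes * inst_per_cycle                 # 0.096
data_read_bw = data_miss_rate * (read_frac + write_frac) * block_bytes  # ~0.23
write_bw = write_frac * 4                                               # 0.2 (4-byte words)
print(inst_bw, inst_bw + data_read_bw, write_bw)                        # total read ~0.33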

5.4.5 The instruction and data read bandwidth requirement is the same as in 5.4.4. The data write bandwidth require
