Computer Organization and Design (5th Edition) Answers: Chapter 05 Solutions

Uploaded by: suns****4568 | Document ID: 60808708 | Upload date: 2018-11-18 | Format: PDF | Pages: 26 | Size: 2.01 MB

Chapter 5 Solutions

5.1

5.1.1 4

5.1.2 I, J

5.1.3 A[I][J]

5.1.4 3596 = 8 × 800/4 + 2 × 8 × 8/4 + 8000/4

5.1.5 I, J

5.1.6 A(J, I)

5.2

5.2.1

Word Address   Binary Address   Tag   Index   Hit/Miss
3              0000 0011         0      3      M
180            1011 0100        11      4      M
43             0010 1011         2     11      M
2              0000 0010         0      2      M
191            1011 1111        11     15      M
88             0101 1000         5      8      M
190            1011 1110        11     14      M
14             0000 1110         0     14      M
181            1011 0101        11      5      M
44             0010 1100         2     12      M
186            1011 1010        11     10      M
253            1111 1101        15     13      M

5.2.2

Word Address   Binary Address   Tag   Index   Hit/Miss
3              0000 0011         0      1      M
180            1011 0100        11      2      M
43             0010 1011         2      5      M
2              0000 0010         0      1      H
191            1011 1111        11      7      M
88             0101 1000         5      4      M
190            1011 1110        11      7      H
14             0000 1110         0      7      M
181            1011 0101        11      2      H
44             0010 1100         2      6      M
186            1011 1010        11      5      M
253            1111 1101        15      6      M

5.2.3

                                      Cache 1        Cache 2        Cache 3
Word Address   Binary Address   Tag   Index  H/M    Index  H/M    Index  H/M
3              0000 0011         0      3     M       1     M       0     M
180            1011 0100        22      4     M       2     M       1     M
43             0010 1011         5      3     M       1     M       0     M
2              0000 0010         0      2     M       1     M       0     M
191            1011 1111        23      7     M       3     M       1     M
88             0101 1000        11      0     M       0     M       0     M
190            1011 1110        23      6     M       3     H       1     H
14             0000 1110         1      6     M       3     M       1     M
181            1011 0101        22      5     M       2     H       1     M
44             0010 1100         5      4     M       2     M       1     M
186            1011 1010        23      2     M       1     M       0     M
253            1111 1101        31      5     M       2     M       1     M

Cache 1 miss rate = 100%
Cache 1 total cycles = 12 × 25 + 12 × 2 = 324
Cache 2 miss rate = 10/12 = 83%
Cache 2 total cycles = 10 × 25 + 12 × 3 = 286
Cache 3 miss rate = 11/12 = 92%
Cache 3 total cycles = 11 × 25 + 12 × 5 = 335
Cache 2 provides the best performance.

5.2.4 First we must compute the number of cache blocks in the initial cache configuration. For this, we divide 32 KiB by 4 (for the number of bytes per word) and again by 2 (for the number of words per block). This gives us 4096 blocks and a resulting index field width of 12 bits. We also have a word offset size of 1 bit and a byte offset size of 2 bits. This gives us a tag field size of 32 − 15 = 17 bits. These tag bits, along with one valid bit per block, will require 18 × 4096 = 73728 bits or 9216 bytes. The total cache size is thus 9216 + 32768 = 41984 bytes.

The total cache size can be generalized to:

totalsize = datasize + (validbitsize + tagsize) × blocks
totalsize = 41984
datasize = blocks × blocksize × wordsize
wordsize = 4
tagsize = 32 − log2(blocks) − log2(blocksize) − log2(wordsize)
validbitsize = 1

Increasing from 2-word blocks to 16-word blocks will reduce the tag size from 17 bits to 14 bits. In order to determine the number of blocks, we solve the inequality:

41984 ≥ 64 × blocks + 15 × blocks

Solving this inequality gives us 531 blocks, and rounding to the next power of two gives us a 1024-block cache. The larger block size may require an increased hit time and an increased miss penalty compared with the original cache. The smaller number of blocks may cause a higher conflict miss rate than the original cache.

5.2.5 Associative caches are designed to reduce the rate of conflict misses. As such, a sequence of read requests with the same 12-bit index field but a different tag field will generate many misses. For the cache described above, the sequence 0, 32768, 0, 32768, 0, 32768, …, would miss on every access, while a 2-way set associative cache with LRU replacement, even one with a significantly smaller overall capacity, would hit on every access after the first two.

5.2.6 Yes, it is possible to use this function to index the cache. However, information about the five bits is lost because the bits are XOR'd, so you must include more tag bits to identify the address in the cache.

5.3

5.3.1 8

5.3.2 32

5.3.3 1 + (22/8/32) = 1.086

5.3.4 3

5.3.5 0.25

5.3.6 ⟨Index, tag, data⟩:
⟨000001₂, 0001₂, mem[1024]⟩
⟨000001₂, 0011₂, mem[16]⟩
⟨001011₂, 0000₂, mem[176]⟩
⟨001000₂, 0010₂, mem[2176]⟩
⟨001110₂, 0000₂, mem[224]⟩
⟨001010₂, 0000₂, mem[160]⟩

5.4

5.4.1 The L1 cache has a low write miss penalty while the L2 cache has a high write miss penalty. A write buffer between the L1 and L2 cache would hide the write miss latency of the L2 cache. The L2 cache would benefit from write buffers when replacing a dirty block, since the new block would be read in before the dirty block is physically written to memory.

5.4.2 On an L1 write miss, the word is written directly to L2 without bringing its block into the L1 cache. If this results in an L2 miss, its block must be brought into the L2 cache, possibly replacing a dirty block which must first be written to memory.

5.4.3 After an L1 write miss, the block will reside in L2 but not in L1. A subsequent read miss on the same block will require that the block in L2 be written back to memory, transferred to L1, and invalidated in L2.

5.4.4 One in four instructions is a data read, one in ten instructions is a data write. For a CPI of 2, there are 0.5 instruction accesses per cycle; 12.5% of cycles will require a data read, and 5% of cycles will require a data write. The instruction bandwidth is thus (0.0030 × 64) × 0.5 = 0.096 bytes/cycle. The data read bandwidth is thus 0.02 × (0.13 + 0.050) × 64 = 0.23 bytes/cycle. The total read bandwidth requirement is 0.33 bytes/cycle. The data write bandwidth requirement is 0.05 × 4 = 0.2 bytes/cycle.

5.4.5 The instruction and data read
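The tag/index decomposition and hit/miss outcomes in 5.2.1 and 5.2.2 can be checked mechanically. Below is a minimal sketch (the function and variable names are our own, not from the text) of a direct-mapped lookup; with 16 one-word blocks it reproduces the 5.2.1 columns, and with 8 two-word blocks the 5.2.2 columns.

```python
def simulate(addresses, num_blocks, words_per_block):
    """Direct-mapped lookup: yields (tag, index, 'H' or 'M') per word address."""
    cache = {}  # index -> tag currently resident in that block
    out = []
    for addr in addresses:
        block = addr // words_per_block   # block address
        index = block % num_blocks        # which cache block it maps to
        tag = block // num_blocks         # remaining upper address bits
        out.append((tag, index, "H" if cache.get(index) == tag else "M"))
        cache[index] = tag                # fill on miss (no effect on hit)
    return out

refs = [3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253]
table_521 = simulate(refs, num_blocks=16, words_per_block=1)  # all misses
table_522 = simulate(refs, num_blocks=8, words_per_block=2)   # hits on 2, 190, 181
```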
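The 5.2.4 bookkeeping (index/offset/tag widths and the 41984-byte total) follows directly from the generalized formula; here is a small sketch, with names of our own choosing:

```python
from math import log2

def total_cache_bytes(blocks, words_per_block, addr_bits=32, word_bytes=4):
    index_bits = int(log2(blocks))                                    # 12 for 4096 blocks
    offset_bits = int(log2(words_per_block)) + int(log2(word_bytes))  # word + byte offset
    tag_bits = addr_bits - index_bits - offset_bits                   # 32 - 15 = 17
    data_bytes = blocks * words_per_block * word_bytes                # 32 KiB of data
    overhead_bits = blocks * (tag_bits + 1)                           # tag + one valid bit per block
    return data_bytes + overhead_bits // 8

# 4096 two-word blocks: 32768 data bytes + 9216 overhead bytes = 41984 bytes
```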
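The 5.2.5 claim, that a direct-mapped cache thrashes on the sequence 0, 32768, 0, 32768, … while even a small 2-way LRU cache hits after the first two accesses, can be illustrated with a toy simulation over block addresses (a sketch; all names are our own):

```python
from collections import OrderedDict

def direct_mapped_misses(block_addrs, num_blocks):
    cache, misses = {}, 0
    for a in block_addrs:
        index, tag = a % num_blocks, a // num_blocks
        if cache.get(index) != tag:
            misses += 1
            cache[index] = tag
    return misses

def two_way_lru_misses(block_addrs, num_sets):
    sets, misses = [OrderedDict() for _ in range(num_sets)], 0
    for a in block_addrs:
        index, tag = a % num_sets, a // num_sets
        ways = sets[index]
        if tag in ways:
            ways.move_to_end(tag)         # refresh LRU order on a hit
        else:
            misses += 1
            if len(ways) == 2:
                ways.popitem(last=False)  # evict the least recently used way
            ways[tag] = True
    return misses

seq = [0, 32768] * 4  # same index, different tags in the direct-mapped cache
```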
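The 5.4.4 arithmetic can be replayed directly. The variable names below are ours, and the data-read fraction uses the same rounded 0.13 (for 12.5% of cycles) that the text uses:

```python
cpi = 2.0
instr_per_cycle = 1 / cpi                           # 0.5 instruction fetches per cycle
block_bytes = 64

instr_bw = 0.0030 * block_bytes * instr_per_cycle   # instruction-miss read traffic
data_read_bw = 0.02 * (0.13 + 0.050) * block_bytes  # misses from data reads and writes both fetch a block
total_read_bw = instr_bw + data_read_bw             # about 0.33 bytes/cycle
write_bw = 0.05 * 4                                 # write-through: one 4-byte word per data write
```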
