1. Introduction to rbd bench-write

rbd bench-write is the block-device write benchmarking tool that ships with Ceph. It tests block-device write performance against an RBD image through librbd.

2. Environment requirements

Storage node: the tool can be run directly, no extra configuration is needed.
Client node: must run on CentOS 7 with a working Ceph client environment installed and configured (refer to the document written by 谢菲).

3. Usage

3.1 Syntax

The syntax of rbd bench-write is:

    rbd bench-write <pool-name>/<image-name> [options]

The following options are accepted:

    --io-size:    per-I/O write size; default unit is bytes, default value 4096 bytes (4K); supported units: B, K, M, G, T
    --io-threads: number of threads; default 16
    --io-total:   total bytes to write; default unit is bytes, default value 1024M; supported units: B, K, M, G, T
    --io-pattern: write pattern; seq = sequential write, rand = random write; default seq (sequential)

3.2 Running the tool

The tool can be run on a UCS 2.0 cluster node or on a Ceph client; the command is the same in both cases.

(1) On a UCS 2.0 cluster node:

    # Sequential write
    [root@m-ceph-16 ~]# rbd bench-write hot-pool/rbd-001 --io-size 512K --io-threads 10 --io-total 1G --io-pattern seq
    bench-write  io_size 524288 io_threads 10 bytes 1073741824 pattern sequential
      SEC       OPS   OPS/SEC   BYTES/SEC
        1       236    240.45   126063600.51
        2       461    234.08   122726687.52
        3       709    234.22   122800112.16
        4       946    238.29   124932489.11
        5      1177    236.65   124070233.01
        6      1413    233.43   122384527.36
        7      1607    229.66   120408828.70
        8      1846    230.61   120903606.70
    elapsed:     9  ops:     2048  ops/sec:   226.21  bytes/sec: 118601369.39
    [root@m-ceph-16 ~]#

Sanity check: 512K × 2048 ops = 1024M total written; 118601369.39 bytes/sec ≈ 113.11 MB/sec, and 113.1M × 9 sec ≈ 1024M, which matches.

Checking with ceph df (USED was 17M before the write), the increase in used capacity is close to the 1G that was written. Inspecting the object files on an OSD shows they are mostly 4M files.

    [root@m-ceph-14 3.7f_head]# ceph df
    GLOBAL:
        SIZE      AVAIL     RAW USED     %RAW USED
        8977G     8799G     178G         1.99
    POOLS:
        NAME           ID     USED      %USED     MAX AVAIL     OBJECTS
        ecpool-001     1      0         0         4889G         0
        hot-pool       2      0         0         732G          0
        hot-pool1      3      1169M     0.01      1464G         315

    # Random write
    [root@m-ceph-16 ~]# rbd bench-write hot-pool/rbd-001 --io-size 512K --io-threads 10 --io-total 1G --io-pattern rand
    bench-write  io_size 524288 io_threads 10 bytes 1073741824 pattern random
      SEC       OPS   OPS/SEC   BYTES/SEC
        1       340    349.01   182982444.22
        2       640    322.13   168887758.91
        3       971    326.10   170970970.94
        4      1293    325.24   170517651.29
        5      1611    322.80   169242081.90
        6      1919    314.25   164756141.12
    elapsed:     7  ops:     2048  ops/sec:   289.26  bytes/sec: 151653071.65
    [root@m-ceph-16 ~]#

    [root@m-ceph-14 3.7f_head]# ceph df
    GLOBAL:
        SIZE      AVAIL     RAW USED     %RAW USED
        8977G     8800G     177G         1.98
    POOLS:
        NAME           ID     USED      %USED     MAX AVAIL     OBJECTS
        ecpool-001     1      0         0         4889G         0
        hot-pool       2      0         0         732G          0
        hot-pool1      3      4387M     0.05      1465G         1980

After the random write, ceph df shows 4387M used for a 1G write, roughly four times 1G. Compared with the sequential write, the random write also produces far more objects: unlike a sequential write, a random write does not fill each object up to its full 4M. Ceph's accounting of randomly written capacity appears imprecise as well; 1G of random writes shows up as considerably more used capacity.
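The bookkeeping above can be double-checked with a short script. This is a minimal sketch; all figures are taken from the sequential run shown above, not from new measurements:

```python
# Verify the capacity/throughput arithmetic of the sequential bench-write run.

IO_SIZE = 512 * 1024          # --io-size 512K, in bytes
TOTAL_OPS = 2048              # final "ops" reported by rbd bench-write
BYTES_PER_SEC = 118601369.39  # final "bytes/sec" reported by rbd bench-write
ELAPSED = 9                   # elapsed seconds reported by rbd bench-write

total_written = IO_SIZE * TOTAL_OPS
print(total_written)                         # 1073741824 bytes = 1G = --io-total
mb_per_sec = BYTES_PER_SEC / (1024 * 1024)
print(round(mb_per_sec, 2))                  # 113.11 MB/s
print(round(mb_per_sec * ELAPSED))           # 1018 MB, close to the 1024M written
```

The last figure is slightly below 1024M only because the reported rate is an average over whole seconds.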
The on-disk space usage also differs from the sequential write (the object files are only partially filled):

    [root@m-ceph-14 3.7f_head]# ll -lh
    total 6.6M
    -rw-r--r-- 1 root root    0 Sep  9 11:43 __head_0000007F__3
    -rw-r--r-- 1 root root 1.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.00000000000007a1__head_64A6557F__3
    -rw-r--r-- 1 root root 3.0M Sep 10 09:34 rbd\udata.16fcd6b8b4567.000000000000102e__head_066B877F__3
    -rw-r--r-- 1 root root 2.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.000000000000184a__head_3F0100FF__3
    -rw-r--r-- 1 root root 1.0M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000001bc3__head_FC34F2FF__3
    -rw-r--r-- 1 root root 3.0M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000001c8a__head_461421FF__3
    -rw-r--r-- 1 root root 3.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000001cef__head_D1428CFF__3
    -rw-r--r-- 1 root root 2.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.00000000000021b5__head_F08CE3FF__3
    -rw-r--r-- 1 root root 1.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000003df8__head_942929FF__3
    -rw-r--r-- 1 root root 512K Sep 10 09:34 rbd\udata.16fcd6b8b4567.00000000000040fd__head_729BC27F__3
    -rw-r--r-- 1 root root 1.5M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000004548__head_CD3E817F__3
    -rw-r--r-- 1 root root 2.0M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000004b20__head_1FA1687F__3
    -rw-r--r-- 1 root root 2.0M Sep 10 09:34 rbd\udata.16fcd6b8b4567.0000000000005a35__head_2E069D7F__3

(2) On a Ceph client, usage is exactly the same as on a cluster node:

    [root@localhost ~]# rbd bench-write hot-pool/rbd-001 --io-size 512K --io-threads 10 --io-total 1G --io-pattern rand
    bench-write  io_size 524288 io_threads 10 bytes 1073741824 pattern random
      SEC       OPS   OPS/SEC   BYTES/SEC
        1       234    235.42   123429570.92
        2       441    223.54   117197450.36
        3       660    221.20   115970132.90
        4       882    220.54   115627205.55
        5      1091    219.77   115223046.05
        6      1312    215.38   112918968.63
        7      1524    215.83   113155180.18
        8      1742    216.71   113617677.71
        9      1965    216.81
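When comparing runs it can be handy to turn the per-second output into numbers. Below is a minimal sketch; `parse_line` is a hypothetical helper, and the sample line is taken from the client run above, assuming the four-column SEC / OPS / OPS/SEC / BYTES/SEC format shown:

```python
# Parse one rbd bench-write per-second data line and convert the rate to MB/s.

def parse_line(line):
    """Split a 'SEC OPS OPS/SEC BYTES/SEC' data line into typed fields."""
    sec, ops, ops_per_sec, bytes_per_sec = line.split()
    return int(sec), int(ops), float(ops_per_sec), float(bytes_per_sec)

sample = "    1       234    235.42   123429570.92"
sec, ops, ops_rate, byte_rate = parse_line(sample)
print(sec, ops)                              # 1 234
print(round(byte_rate / (1024 * 1024), 2))   # 117.71 (MB/s for that second)
```

Feeding every data line through this and averaging the last column gives the same figure as the tool's final bytes/sec summary.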