Buffered and cached disk reads with hdparm.
Seek time performance with the Python seeker.py script.
Additional random-read performance with fio (see the fio tutorial).
Tests were performed on a 64-bit Debian 6 (Squeeze) guest running on XenServer 6.0.2.
Enabling the HDD Write Back Cache in the IMS management module made no significant difference.
xvda – Intel 320 SSD RAID1
xvdb – Seagate Savvio 10K.5 ST9900805SS SAS RAID1
hdparm:
# hdparm -tT /dev/xvda

/dev/xvda:
 Timing cached reads:   13578 MB in  1.99 seconds = 6824.44 MB/sec
 Timing buffered disk reads: 1422 MB in  3.00 seconds = 473.73 MB/sec
# hdparm -tT /dev/xvdb

/dev/xvdb:
 Timing cached reads:   13618 MB in  1.99 seconds = 6844.60 MB/sec
 Timing buffered disk reads: 1322 MB in  3.00 seconds = 440.17 MB/sec
seeker.py:
# ./seeker.py /dev/xvda
Benchmarking /dev/xvda [10.00 GB]
10/0.00 = 3310 seeks/second
    0.30 ms random access time
100/0.02 = 5053 seeks/second
    0.20 ms random access time
1000/0.18 = 5635 seeks/second
    0.18 ms random access time
10000/1.82 = 5498 seeks/second
    0.18 ms random access time
100000/17.14 = 5834 seeks/second
    0.17 ms random access time
# ./seeker.py /dev/xvdb
Benchmarking /dev/xvdb [20.00 GB]
10/0.00 = 2529 seeks/second
    0.40 ms random access time
100/0.01 = 8396 seeks/second
    0.12 ms random access time
1000/0.13 = 7611 seeks/second
    0.13 ms random access time
10000/1.16 = 8620 seeks/second
    0.12 ms random access time
100000/11.25 = 8890 seeks/second
    0.11 ms random access time
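For context, the seek benchmark essentially times a large number of single-sector reads at random offsets on the raw device. Below is a minimal sketch of that idea in Python; it is not the actual seeker.py source, the device path, block size and seek count are arbitrary, and reads served from the page cache can inflate the numbers.

#!/usr/bin/env python
# Rough sketch of a random-access read benchmark (not the real seeker.py).
import os, random, time

DEV = "/dev/xvda"   # device to test (adjust to your setup)
BLOCK = 512         # read one sector per seek
SEEKS = 10000       # number of random reads

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)          # device size in bytes

start = time.time()
for _ in range(SEEKS):
    offset = random.randrange(0, size - BLOCK)
    os.lseek(fd, offset, os.SEEK_SET)        # jump to a random position
    os.read(fd, BLOCK)                       # force an actual read there
elapsed = time.time() - start
os.close(fd)

print("%d/%.2f = %d seeks/second" % (SEEKS, elapsed, SEEKS / elapsed))
print("%.2f ms random access time" % (1000.0 * elapsed / SEEKS))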
fio:
# cat random-read-test-xvda.fio
[random-read]
rw=randread
size=128m
directory=/tmp
# fio random-read-test-xvda.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r] [100.0% done] [7000K/0K /s] [1709/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=11333
  read : io=131072KB, bw=7277KB/s, iops=1819, runt= 18012msec
    clat (usec): min=6, max=6821, avg=541.36, stdev=112.69
    bw (KB/s) : min= 6552, max=10144, per=100.18%, avg=7289.37, stdev=840.90
  cpu          : usr=0.44%, sys=6.86%, ctx=32882, majf=0, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=32768/0, short=0/0
     lat (usec): 10=0.01%, 250=0.01%, 500=17.30%, 750=81.96%, 1000=0.66%
     lat (msec): 2=0.05%, 4=0.01%, 10=0.01%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=7276KB/s, minb=7451KB/s, maxb=7451KB/s, mint=18012msec, maxt=18012msec

Disk stats (read/write):
  xvda: ios=32561/3, merge=0/7, ticks=17340/0, in_queue=17340, util=96.44%
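For quick one-off runs, the same job can also be passed to fio as command-line options instead of a job file; assuming a fio build that accepts the usual long options, the equivalent invocation would be roughly:

# fio --name=random-read --rw=randread --size=128m --directory=/tmp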
# cat random-read-test-xvdb.fio
[random-read]
rw=randread
size=128m
directory=/home
# fio random-read-test-xvdb.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r] [100.0% done] [6688K/0K /s] [1633/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=11525
  read : io=131072KB, bw=7050KB/s, iops=1762, runt= 18592msec
    clat (usec): min=252, max=36782, avg=558.96, stdev=289.86
    bw (KB/s) : min= 5832, max= 9416, per=100.13%, avg=7057.95, stdev=733.93
  cpu          : usr=0.90%, sys=6.39%, ctx=32905, majf=0, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=32768/0, short=0/0
     lat (usec): 500=19.77%, 750=79.20%, 1000=0.79%
     lat (msec): 2=0.19%, 4=0.02%, 10=0.02%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=7049KB/s, minb=7219KB/s, maxb=7219KB/s, mint=18592msec, maxt=18592msec

Disk stats (read/write):
  xvdb: ios=32447/38, merge=0/4, ticks=17836/1012, in_queue=18848, util=96.56%
# cat random-read-test-aio-xvda.fio
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=8
direct=1
invalidate=1
# fio random-read-test-aio-xvda.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [33992K/0K /s] [8299/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=12384
  read : io=131072KB, bw=37567KB/s, iops=9391, runt= 3489msec
    slat (usec): min=5, max=65, avg= 9.79, stdev= 1.79
    clat (usec): min=427, max=42452, avg=833.25, stdev=714.35
    bw (KB/s) : min=31520, max=42024, per=101.20%, avg=38017.33, stdev=4144.75
  cpu          : usr=6.19%, sys=12.39%, ctx=29265, majf=0, minf=32
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=32768/0, short=0/0
     lat (usec): 500=2.48%, 750=42.14%, 1000=38.11%
     lat (msec): 2=16.97%, 4=0.19%, 10=0.08%, 50=0.02%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=37567KB/s, minb=38468KB/s, maxb=38468KB/s, mint=3489msec, maxt=3489msec

Disk stats (read/write):
  xvda: ios=32017/0, merge=5/6, ticks=26760/0, in_queue=28344, util=96.71%
# cat random-read-test-aio-xvdb.fio
[random-read]
rw=randread
size=128m
directory=/home
ioengine=libaio
iodepth=8
direct=1
invalidate=1
# fio random-read-test-aio-xvdb.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
Starting 1 process
Jobs: 1 (f=1): [r] [75.0% done] [43073K/0K /s] [11K/0 iops] [eta 00m:01s]
random-read: (groupid=0, jobs=1): err= 0: pid=12464
  read : io=131072KB, bw=41783KB/s, iops=10445, runt= 3137msec
    slat (usec): min=7, max=409, avg=11.79, stdev= 3.66
    clat (usec): min=274, max=1464, avg=745.43, stdev=141.13
    bw (KB/s) : min=40416, max=42096, per=99.99%, avg=41777.33, stdev=668.21
  cpu          : usr=1.40%, sys=21.94%, ctx=27542, majf=0, minf=32
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=32768/0, short=0/0
     lat (usec): 500=2.23%, 750=49.39%, 1000=46.19%
     lat (msec): 2=2.19%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=41782KB/s, minb=42785KB/s, maxb=42785KB/s, mint=3137msec, maxt=3137msec

Disk stats (read/write):
  xvdb: ios=30272/0, merge=0/0, ticks=22772/0, in_queue=22772, util=96.17%
SSD performance should be much better than SAS, yet the SSD array does not outperform the SAS array here. My guess is that this is a controller limitation.
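One way to probe the controller-limitation theory would be to repeat the libaio test with a deeper queue and see whether the SSD pulls ahead. A hypothetical job file for that is below; iodepth=32 is an arbitrary choice, and this test was not run here.

# cat random-read-test-aio-deep.fio
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=32
direct=1
invalidate=1

If throughput stays roughly flat at the higher queue depth, the bottleneck is more likely the controller or the virtualization layer than the drives themselves.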
Please comment about your experience with IMS disk performance.
Just testing Intel 520 SSDs in a RAID 10 vs. Toshiba 600 GB disks in a RAID 5. The SSDs should perform much better, but do not. About 10-20% performance gain at most…
Cheers,
Björn