I have an instance with 8 EBS volumes combined into a striped logical volume, 8 TB in total, on an m4.4xlarge instance. I have been benchmarking IOPS on this setup and I am wondering why I get lower IOPS than should be possible: if my calculation is right, with 8 EBS volumes of 1000 GB each (3 IOPS per GB) I should get 24,000 IOPS in total.
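The striped LV was created along these lines; the volume group / logical volume names, the stripe size, and the filesystem below are just placeholders, only the device names match the disk stats further down:
pvcreate /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk /dev/xvdl /dev/xvdm
vgcreate vg_data /dev/xvd[f-m]
# stripe across all 8 PVs; the 256 KB stripe size is an assumed value
lvcreate --name lv_data --stripes 8 --stripesize 256 --extents 100%FREE vg_data
mkfs.ext4 /dev/vg_data/lv_data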
Here is what I get running fio with an 8 KB read block size:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=8k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=100
test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [100.0% done] [95776KB/0KB/0KB /s] [11.1K/0/0 iops]
test: (groupid=0, jobs=1): err= 0: pid=83382: Thu Jul 20 11:14:30 2017
read : io=4096.0MB, bw=155419KB/s, iops=19427, runt= 26987msec
cpu : usr=1.63%, sys=12.23%, ctx=48412, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=524288/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=4096.0MB, aggrb=155419KB/s, minb=155419KB/s, maxb=155419KB/s, mint=26987msec, maxt=26987msec
Disk stats (read/write):
dm-0: ios=533358/544, merge=0/0, ticks=1070876/1552, in_queue=1073072, util=100.00%, aggrios=66993/55, aggrmerge=13/13, aggrticks=131787/158, aggrin_queue=131943, aggrutil=52.90%
xvdf: ios=66999/43, merge=7/11, ticks=109968/52, in_queue=110020, util=36.31%
xvdg: ios=66943/58, merge=6/12, ticks=28244/148, in_queue=28388, util=27.43%
xvdh: ios=66939/50, merge=43/11, ticks=23224/20, in_queue=23252, util=25.90%
xvdi: ios=66965/49, merge=4/17, ticks=38348/48, in_queue=38396, util=28.43%
xvdj: ios=66937/54, merge=10/12, ticks=282140/140, in_queue=282276, util=48.91%
xvdk: ios=67009/58, merge=4/18, ticks=38392/136, in_queue=38524, util=28.71%
xvdl: ios=67101/91, merge=24/13, ticks=309980/500, in_queue=310468, util=52.90%
xvdm: ios=67053/44, merge=7/17, ticks=224000/224, in_queue=224224, util=46.26%
When testing with a 4 KB block size I get:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=100
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [100.0% done] [99728KB/0KB/0KB /s] [24.1K/0/0 iops]
test: (groupid=0, jobs=1): err= 0: pid=75399: Thu Jul 20 10:52:00 2017
read : io=4096.0MB, bw=95909KB/s, iops=23977, runt= 43732msec
cpu : usr=1.71%, sys=15.71%, ctx=83861, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=4096.0MB, aggrb=95909KB/s, minb=95909KB/s, maxb=95909KB/s, mint=43732msec, maxt=43732msec
Disk stats (read/write):
dm-0: ios=1096214/3350, merge=0/0, ticks=1753128/9584, in_queue=1767264, util=100.00%, aggrios=137710/400, aggrmerge=31/34, aggrticks=215537/1126, aggrin_queue=216646, aggrutil=59.81%
xvdf: ios=137497/407, merge=9/47, ticks=137144/492, in_queue=137620, util=39.09%
xvdg: ios=137500/443, merge=28/37, ticks=200424/880, in_queue=201300, util=43.21%
xvdh: ios=137603/441, merge=42/31, ticks=47364/696, in_queue=48040, util=34.31%
xvdi: ios=137891/418, merge=27/34, ticks=536316/1484, in_queue=537768, util=59.81%
xvdj: ios=137742/381, merge=22/31, ticks=518500/2232, in_queue=520720, util=59.10%
xvdk: ios=137837/344, merge=34/34, ticks=163696/1544, in_queue=165260, util=40.42%
xvdl: ios=137792/381, merge=71/36, ticks=47736/880, in_queue=48580, util=34.10%
xvdm: ios=137825/385, merge=15/23, ticks=73116/804, in_queue=73884, util=35.07%
I am also significantly below the 250 MB/s throughput limit for an EBS-optimized m4.4xlarge instance, as listed here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
In fact, for the 8 KB block read test I get about 20k IOPS at roughly 155 MB/s of throughput. Any ideas why I cannot push the IOPS up to 24k with 8 KB reads?
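For reference, this is the arithmetic I am comparing against (assuming 3 IOPS per GB and the 250 MB/s EBS-optimized limit from the link above):
8 volumes x 1000 GB x 3 IOPS/GB = 24,000 IOPS expected
24,000 IOPS x 8 KB = 192,000 KB/s, i.e. ~187 MB/s, still under the 250 MB/s limit
measured at 8 KB: 19,427 IOPS x 8 KB = ~155,400 KB/s
measured at 4 KB: 23,977 IOPS x 4 KB = ~95,900 KB/s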