Overview of disk activity monitoring tools

Action                        Tool
See overall I/O wait          top
See I/O per device            iostat
See I/O per process           iotop

We'll look at all of these in this article.

Let's use top to see if I/O wait is causing the problem:

~]# top
Tasks: 257 total, 3 running, 254 sleeping, 0 stopped, 0 zombie
%Cpu(s): … 55.9 wa, …
MiB Mem : … 7250.4 avail Mem

wa - this is I/O wait: the percentage of time the CPU(s) spent waiting on disk completion over the last sampling period. So (this is a 1-core system, and top scales percentages by cores - up to number of cores × 100%), over the last period the CPU spent 55.9% of its time waiting on the disk (it could read 111.8% on a 2-core system…). There's a short sketch of how this figure is computed at the end of this article.

There's activity in sys (which is time spent running kernel-space processes) - that could be expected given the high wait, since user space can't access the disk. Hardware interrupts (hi) also look high - these are signals sent from the disk to the CPU.

From the top output we can also see that there are 3 running tasks. This really narrows things down: one or all of those tasks are causing the high wait. How to investigate? To keep investigating we can stay focused on the disk. Seeing how much I/O is happening per device can be done with iostat - this isn't built in (it ships with the sysstat package). But iotop (used to see I/O per process) is, so alternatively you can work backwards from the processes.

~]# iostat -x 2 1

One column in this output gives the number of sectors written to the device per second. It's also worth noting that the %util column is the percentage of the time period during which I/O requests were issued to the device. For serial devices, staying close to 100% for a while is bad. However, if the output shows close to 100 for a RAID array setup or modern SSDs, it doesn't reflect a performance limit (sounds like an update might be needed).

To estimate what a RAID array can actually sustain, a rough formula is:

number of disks * average I/O operations per second on 1 disk / (% of read workload + (RAID factor * % of write workload))

There's a worked example of this at the end of this article.

What about how to improve it? Simple answer - take benchmarks and make sure I/O-intensive applications cache more in RAM.
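To make the wa figure from top concrete, here's a minimal sketch (assuming a Linux /proc filesystem; the filename and the 2-second delay are illustrative). It computes the same I/O-wait percentage by sampling /proc/stat twice; note it aggregates across all CPUs, so it tops out at 100%:

# iowait.py - compute "wa" the way top does: sample the aggregate
# cpu line in /proc/stat twice and compare the iowait tick counters.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # first line: "cpu  user nice system idle iowait irq softirq ..."
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(2)  # sampling period, like top's default refresh delay
after = cpu_times()

delta = [b - a for a, b in zip(before, after)]
print("wa: {:.1f}%".format(100.0 * delta[4] / sum(delta)))  # 5th field is iowait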
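The per-device counters that iostat reports also live in /proc, so you can watch them directly. A minimal sketch (field positions taken from the kernel's diskstats documentation; the 2-second interval is arbitrary) that prints sectors written per second, per device:

# sectors.py - sectors written to each device per second, from /proc/diskstats.
import time

def sectors_written():
    counters = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields: major, minor, device name, then the I/O counters;
            # the 7th counter after the name is sectors written
            counters[fields[2]] = int(fields[9])
    return counters

before = sectors_written()
time.sleep(2)
after = sectors_written()
for dev, count in after.items():
    rate = (count - before.get(dev, count)) / 2.0
    if rate > 0:
        print("{}: {:.0f} sectors written/s".format(dev, rate))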
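Finally, a worked example of the RAID estimate above. Every input here is an illustrative assumption, not a measurement: 8 disks at roughly 150 IOPS each (ballpark for 10k rpm drives), a 70/30 read/write mix, and a RAID factor of 2 (the write penalty of RAID 1/10):

# raid_iops.py - plug illustrative numbers into the estimate above.
disks = 8
iops_per_disk = 150             # average I/O operations per second on 1 disk
read_pct, write_pct = 0.7, 0.3  # % of read / % of write workload
raid_factor = 2                 # write penalty, e.g. RAID 1/10

raw = disks * iops_per_disk
effective = raw / (read_pct + raid_factor * write_pct)
print("raw: {} IOPS, effective: {:.0f}".format(raw, effective))
# raw: 1200 IOPS, effective: 923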
Written by Nick Otter.