If you’re reading this, you’ve probably heard of the term I/Os per second, or “IOPS” – but why is it important to you? A storage system’s performance is commonly measured in IOPS: the number of input/output operations it can complete per second. It’s important to understand the performance specs of a storage system, because when it’s bombarded with requests for data, its I/O capacity can run out long before it reaches full data capacity. Knowing the IOPS of a system is therefore vital to fulfilling its true potential.
How we measure
Transfer rates measure how fast data is moved via sequential reads and writes, are expressed in megabytes per second (MB/s), and are mostly associated with larger files of static, unchanging data. This is where IOPS come in: they’re expressed as an integer, and describe the maximum number of reads and writes to non-contiguous storage locations. These are dominated by “seek time”, the time it takes a disk drive to position its read/write heads over the right location on-disk. IOPS tend to be associated with smaller files and constantly changing data, and make up the workloads most typical in enterprise data applications.
A typical disk drive provides fewer than 200 IOPS. As an example, a 10 GB application that reads ten 1 GB files sequentially could finish in 100 seconds – the transfer rate would be 100 MB/s, and it would consume only 10 I/O operations. But if the same application requested the same data as many small random reads, demanding 1,000 IOPS from a drive that can only deliver 200, chances are it would run a lot slower than the first example. This is how an IOPS-heavy workload can really slow down your storage system’s performance.
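The arithmetic behind this example can be sketched as follows (the transfer rate and IOPS figures are the illustrative ones from the text, not measurements of any particular drive):

```python
def sequential_time(total_mb, rate_mb_s):
    """Time (seconds) to stream data at a given transfer rate."""
    return total_mb / rate_mb_s

def random_time(num_ios, disk_iops):
    """Time (seconds) for a disk to service a number of random I/Os."""
    return num_ios / disk_iops

# Ten 1 GB files read sequentially at 100 MB/s: ten large I/Os
print(sequential_time(10_000, 100))   # 100.0 seconds

# The same data requested as 100,000 small random reads
# on a disk that tops out at 200 IOPS
print(random_time(100_000, 200))      # 500.0 seconds
```

The same volume of data takes five times longer once the workload shifts from a handful of large sequential reads to many small random ones – the drive’s IOPS ceiling, not its transfer rate, becomes the bottleneck.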
Leaving file type and size aside for the moment, it’s important to note that the way a software application uses a file determines the workload it generates. Consider how an application changes files, how often it changes them, and how it uses cache versus disk reads and writes; the processing involved, and the way it’s done, all affect the workload.
Storage systems have to keep up with these processes, and must provide data fast enough that applications are not kept waiting. Good transfer rates for large image files, for example, keep things ticking over smoothly – but smaller files require more random I/O, or higher IOPS, and this can cause problems long before the system runs out of data capacity. This is why understanding workloads is essential.
This formula is a good way of calculating a decently close ‘raw IOPS’ figure: IOPS = 1/(Avg Seek Time + Avg Latency), with both times expressed in seconds.
Another rule of thumb is to assume 180 IOPS for a 15K RPM drive, 120 IOPS for a 10K RPM drive, 80 IOPS for a 7,200 RPM drive and 40 IOPS for a 5,400 RPM drive. For a storage array, multiply the per-disk IOPS by the number of spindles to get the raw IOPS.
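Both estimation methods above can be expressed in a few lines. The seek and latency figures below are assumed, typical values for a 15K RPM drive, used purely for illustration:

```python
def raw_iops(avg_seek_ms, avg_latency_ms):
    """Raw IOPS = 1 / (average seek time + average rotational latency),
    converting the millisecond inputs to seconds first."""
    return 1 / ((avg_seek_ms + avg_latency_ms) / 1000)

def array_raw_iops(per_disk_iops, spindles):
    """Raw IOPS of an array: per-disk IOPS times the number of spindles."""
    return per_disk_iops * spindles

# Assumed figures for a 15K RPM drive: ~3.5 ms seek, ~2.0 ms latency
print(round(raw_iops(3.5, 2.0)))   # ~182 IOPS, close to the 180 rule of thumb

# Eight 15K RPM spindles in one array
print(array_raw_iops(180, 8))      # 1440 raw IOPS
```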
IOPS Performance and Cloud Computing
Cloud computing creates additional challenges for maximizing I/O and understanding your available resources. Platforms built for cloud have a finite number of IOPS available to support the many workloads running on them. An idle Microsoft Windows Server consumes approximately 30 IOPS before you begin using applications. If you have an I/O-intensive application, it is important to understand the capabilities of the platform you intend to use and to be sure that you have enough IOPS headroom to prevent bottlenecks.
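A simple headroom check can make this concrete. This is a minimal sketch, assuming a hypothetical platform budget of 1,000 IOPS and the ~30 IOPS idle-server figure from above:

```python
def iops_headroom(platform_iops, workload_demands):
    """Remaining IOPS after summing the demand of each workload
    hosted on the platform."""
    return platform_iops - sum(workload_demands)

# Hypothetical platform with a 1,000 IOPS budget hosting three idle
# Windows servers (~30 IOPS each) plus an app demanding 600 IOPS
print(iops_headroom(1000, [30, 30, 30, 600]))  # 310 IOPS of headroom
```

If the result approaches zero, adding one more I/O-intensive workload will bottleneck everything sharing that platform.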