I/O Performance of Hard Disks with SSDs and Fusion-io ioDrive
Please note: this article refers to older software or hardware components, or is no longer maintained for other reasons. This page is no longer updated and remains available in the archive purely for reference.
In this I/O performance test we compare the I/O performance of hard disks, SSDs, and a Fusion-io ioDrive using fio. We also test a RAID5 of four hard disks and a RAID5 of four SSDs. For more information on performing I/O tests, see the article Measuring I/O Performance.
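For readers who want to reproduce a comparable measurement, a minimal fio job file for a 4k random-read IOPS test might look like the following. This is a sketch; the device path and all option values are assumptions, not our original test configuration:

```ini
; 4k random read IOPS test (sketch; device path and values are assumptions)
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[randread-4k]
filename=/dev/sdb
rw=randread
bs=4k
iodepth=32
```

Such a job file is run with `fio randread-4k.fio`; fio then reports the achieved IOPS and throughput for the job.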
Summary
Our tests have yielded the following results:
- A few hard drives, as well as RAID systems built from hard disks, deliver higher IOPS and throughput at the beginning of the device than at the end. Hard disks are written from the outside inward, so the beginning of the disk lies on the outer tracks. Performance is higher at the beginning because the read/write head has shorter distances to travel (IOPS performance) and because the linear track speed on the outer tracks is higher at the drive's constant rotational speed (throughput performance).
- Flash memory (SSD, Fusion-io ioDrive) allows much higher input/outputs per second than hard drives. Especially with randomly distributed accesses, flash memory is clearly ahead, because no mechanical movements are necessary as with a hard drive (read/write head movement).
- Our test script inserts a 30-second pause between the individual tests. This pause avoids distorted measurements in the write throughput tests for flash-based storage: in back-to-back write tests the SSD controller would not have enough time for garbage collection and for freeing up blocks for writing (see also Spare Area). Back-to-back write tests also do not correspond well to realistic load conditions.
- The cost per GByte of flash memory is significantly higher than that of traditional hard drives.
- Depending on the requirements, both hard drives and flash memory therefore still have their place.
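The pause logic mentioned above can be sketched as a small wrapper around fio. This is only a sketch: the device path, the job list, and the fio options are assumptions, not our original test script.

```shell
#!/bin/sh
# run_io_tests: run a series of fio tests with a pause after each one,
# so the SSD controller gets idle time for garbage collection.
# Sketch only -- device, job list, and fio options are assumptions.
run_io_tests() {
    dev="$1"      # block device to test, e.g. /dev/sdb
    pause="$2"    # pause between tests in seconds, e.g. 30
    for rw in randread randwrite read write; do
        fio --name="$rw" --filename="$dev" --direct=1 --rw="$rw" \
            --bs=4k --iodepth=32 --runtime=60 --time_based
        sleep "$pause"   # idle time for garbage collection
    done
}
# example (the write tests overwrite data on the device!):
# run_io_tests /dev/sdb 30
```

Without the `sleep`, each write test would start while the controller is still cleaning up after the previous one, which depresses the measured throughput.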
Test System
We performed all tests on the following server system:

| Testsystem | |
|---|---|
| Chassis | SC846 SAS2 (backplane with expander chip) |
| Motherboard | Supermicro X8DT3-F, BIOS version 1.1b |
| CPU | 2x Intel Xeon X5680 |
| RAM | 12x 4 GByte |
| RAID controller | 3ware 9750 SAS2, firmware version FH9X 5.08.00.008, without BBU |
Tests
WDC WD5002ABYS Hard Disks
Write Cache off
Write Cache on
Intel SSDSA2M160G2GC SSD
Write Cache off - Secure Erase
- Before the test, we performed a Secure Erase on the SSD.
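An ATA Secure Erase can be issued with hdparm roughly as follows. This is a sketch, not our exact procedure: the throwaway password and device path are assumptions, and the drive must not be in the "frozen" security state (check with `hdparm -I`).

```shell
#!/bin/sh
# secure_erase: issue an ATA Secure Erase via hdparm.
# WARNING: this irrevocably deletes all data on the device.
# Sketch only -- check "hdparm -I <dev>" first: security must not be frozen.
secure_erase() {
    dev="$1"
    hdparm --user-master u --security-set-pass pass "$dev"  # set a throwaway password
    hdparm --user-master u --security-erase pass "$dev"     # erase; clears the password
}
# example (DESTROYS ALL DATA): secure_erase /dev/sdb
```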
Write Cache off
- Before performing the test, we completely wrote the device several times.
- The measured write values therefore correspond to a worst-case scenario.
Write Cache on
- Before performing the test, we completely wrote the device several times.
- The measured write values therefore correspond to a worst-case scenario.
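Completely pre-filling the device can be sketched with dd as below. Our original fill method may have differed; note also that controllers with transparent compression would need incompressible data instead of zeros.

```shell
#!/bin/sh
# fill_device: completely overwrite a device several times so that the
# flash controller has no free blocks left (worst-case write performance).
# Sketch only -- the original fill method may have differed.
fill_device() {
    dev="$1"
    passes="${2:-2}"
    i=0
    while [ "$i" -lt "$passes" ]; do
        # dd stops with ENOSPC once the device is full
        dd if=/dev/zero of="$dev" bs=1M oflag=direct || true
        i=$((i + 1))
    done
}
# example (overwrites all data!): fill_device /dev/sdb 2
```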
RAID5 of four HUS156060VL HITACHI Hard Drives
- Test configuration: 3ware-Hitachi-HUS156060VL-RAID5-config.txt
- The high IOPS values for sequential accesses are achieved by read-ahead (for reading) and by the cache of the RAID controller (for writing).
- The IOPS performance is higher when only the front part of the device is used, since only the outer tracks of each disk are accessed.
- The throughput performance is likewise higher when only the front part of the device is used.
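Restricting a test to the front part of a device can be done in fio with the `size` option. The following job file is a sketch; the device path and all values are assumptions:

```ini
; random read test over only the first 10% of the device, i.e. the
; outer tracks of the member disks (sketch; values are assumptions)
[outer-tracks]
filename=/dev/sda
direct=1
ioengine=libaio
rw=randread
bs=4k
iodepth=32
runtime=60
time_based
size=10%
```

With `size=10%`, fio confines all accesses to the first tenth of the device, so the read/write heads stay on the outer tracks and travel shorter distances.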
RAID5 of four ATP Velocity SII SSDs
- Test configuration: 3ware-ATP-Velocity-SII-SSD-RAID5-config.txt
- The randomly distributed access tests show that SSDs deliver significantly higher performance than hard disks.
Fusion-io ioDrive
- Test configuration: Fusion-FS1-002-321-CS-config.txt
- Before performing the test, we have thoroughly written the device several times.
- With longer intervals between tests, we were able to observe higher throughput rates, as this test shows. Beforehand, we had completely written the device without interruption. The throughput during the subsequent write (random throughput test with 64 jobs) was 161 MB/s; after a short pause it rose to 397 MB/s and later to 488 MB/s. Apparently, the controller clears the invalid pages in the spare area in the meantime, which are then directly available for writing. Under normal load, such a drop in the throughput rate is rare, so considerably higher values than in our tests here can be reached.
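The observation above can be reproduced with a small script that fills the device, then measures random write throughput after pauses of increasing length. This is a sketch: all fio options, the pause lengths, and the device path are assumptions, not our original procedure.

```shell
#!/bin/sh
# gc_recovery_test: fill the device completely, then measure random write
# throughput after pauses of increasing length, to observe how throughput
# recovers as the controller garbage-collects invalid pages.
# Sketch only -- options, pause lengths, and device path are assumptions.
gc_recovery_test() {
    dev="$1"
    dd if=/dev/zero of="$dev" bs=1M oflag=direct || true  # fill completely
    for pause in 0 60 300; do
        sleep "$pause"   # idle time for garbage collection
        fio --name=gc-recovery --filename="$dev" --direct=1 --rw=randwrite \
            --bs=1M --numjobs=64 --runtime=60 --time_based --group_reporting
    done
}
# example (overwrites all data!): gc_recovery_test /dev/fioa
```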
Additional testing with the Intel SSD 320 Series
Testsystem
We conducted further tests with the following server system:

| Testsystem | |
|---|---|
| Chassis | SC826 Server (Nehalem) (backplane with expander chip) |
| Motherboard | Supermicro X8DT3-LN4F, BIOS version 1.0c |
| CPU | 2x Intel Xeon Quad Core X5570 2.93 GHz 8 MB 6.4 GT/s (Nehalem) |
| RAM | 48 GB ECC Registered DDR3 RAM 2 Rank ATP (12x 4096 MB) |
| RAID controller | Adaptec ASR5405Z 4x SAS/SATA (0,1,1E,5,5EE,6,10,50,60) incl. ZMCP, firmware version 5.2-0 (18252) |
Tests
The SSD is connected directly to the Adaptec RAID controller. Since this SSD series includes precautions against data loss in the event of a power failure, the write cache of the SSD is set to "Enabled (write-back)". A "Simple Volume" is created from the SSD with the RAID controller.
Intel SSDSA2CW160G310 SSD (Intel 320 Series SSDs)
Logical device Read-cache and Write-cache "enabled"
Logical device Read-cache and Write-cache "disabled"
References
Author: Werner Fischer
Werner Fischer, working in the Knowledge Transfer team at Thomas-Krenn, completed his studies of Computer and Media Security at FH Hagenberg in Austria. He is a regular speaker at conferences such as LinuxTag, OSMC, OSDC, and LinuxCon, and an author for various IT magazines. In his spare time he enjoys playing the piano and training for a good result at the annual Linz marathon relay.