SSD RAID Performance in RAID 0, RAID 5, RAID 6 and RAID 10
On paper, RAID performance is easy to rank: RAID 0 offers the highest performance, followed by RAID 10, RAID 5, and RAID 6. However, SSDs behave differently in real-world scenarios. To assess the actual performance of SSD RAID across various RAID levels, I set up multiple RAID arrays and tested them using fio.
Setup for SSD RAID Level Speed Comparison
The hardware used was:
- QNAP TS-473e
- 4 x Samsung 870 EVO 500GB
Further, the SSDs had 70-80% free space during testing, so performance was close to Fresh Out of the Box (FOB) conditions, with plenty of fresh blocks available for writing. For the tests I used a block size of 64k, a numjobs of 16, and an iodepth of 16, mimicking a NAS environment where multiple users are connected simultaneously and reading/writing multiple jobs to the NAS; a sample fio invocation is sketched below.
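For reference, an fio invocation along these lines reproduces the workload shape described above. The target path, file size, and runtime are illustrative assumptions rather than the exact values from my runs; the write test is the same command with --rw=write.

```
# Sequential read test: 16 jobs, 64k blocks, queue depth 16.
# /mnt/raid/testfile, --size and --runtime are placeholder assumptions.
fio --name=nas-read \
    --filename=/mnt/raid/testfile \
    --rw=read \
    --bs=64k \
    --numjobs=16 \
    --iodepth=16 \
    --ioengine=libaio \
    --direct=1 \
    --size=4g \
    --runtime=60 \
    --time_based
```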
Theoretical RAID Performance vs Actual SSD RAID Performance
Theoretically, taking 470 MB/s as the read speed and 200 MB/s as the write speed of a single SSD (based on the average of the actual test results), you would expect to see the following numbers.
Theoretical RAID Performance

| RAID Level | Theoretical Read (MB/s), 4 drives @ 470 MB/s | Theoretical Write (MB/s), 4 drives @ 200 MB/s |
| --- | --- | --- |
| RAID 0 | 4 * 470 = 1880 | 4 * 200 = 800 |
| RAID 10 | 4 * 470 = 1880 | 2 * 200 = 400 |
| RAID 5 | 3 * 470 = 1410 | 1 * 200 = 200 |
| RAID 6 | 2 * 470 = 940 | 0.66 * 200 = 132 |

The write column follows the standard write-penalty model: 4 drives divided by a write penalty of 1, 2, 4, and 6 for RAID 0, RAID 10, RAID 5, and RAID 6 respectively.
The test results contained several anomalies, specifically the read speed of RAID 6, the write speed of RAID 10, and the read speed of RAID 0.
| RAID Level | Actual Read (MB/s) | Actual Write (MB/s) |
| --- | --- | --- |
| RAID 0 | 1142 | 687 |
| RAID 10 | 1474 | 627 |
| RAID 5 | 1461 | 231 |
| RAID 6 | 1523 | 209 |
Exceptional Read/Write Speeds on the RAID 0 and RAID 6 SSD Arrays
The SSD RAID array exhibited the anticipated peak write performance in RAID 0, but the read performance showed some variability. Unlike traditional spinning drives, where RAID 0 yields the highest read performance, the SSDs showed a different pattern. I find this deviation from the expected RAID 0 read performance difficult to interpret; my best guess is that the controller was busy at the time.
Similarly, RAID 6 showed a very high read speed that I cannot explain: it should have been the lowest, yet in my results it was the highest, while the RAID 0 read speeds were unexpectedly low.
fio Output for SSD Array Performance in RAID 0, RAID 10, RAID 5 and RAID 6
If you are curious or more technically inclined, here are the output files from fio, showing all the details such as CPU usage, IOPS, and individual job statistics. To simplify the comparison table, I used the average of all jobs.
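As a side note, if you would rather not average the per-job numbers by hand, fio can aggregate them for you: adding --group_reporting to the command collapses all jobs that share a name into a single summary with combined bandwidth and IOPS. A minimal sketch, reusing the placeholder path from the earlier example:

```
# --group_reporting makes fio print one aggregate bandwidth/IOPS summary
# for all 16 jobs instead of 16 individual per-job entries.
fio --name=nas-read --filename=/mnt/raid/testfile --rw=read \
    --bs=64k --numjobs=16 --iodepth=16 --ioengine=libaio \
    --direct=1 --size=4g --runtime=60 --time_based --group_reporting
```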