Differences between hardware RAID and Linux software RAID

From Thomas-Krenn-Wiki

For Linux, the question often arises whether a hardware RAID controller or Linux software RAID should be used for the storage subsystem. This article shows the differences between hardware RAID and Linux software RAID.

Comparison table

Controller

  • HW RAID: hardware RAID controller (e.g. Adaptec or Avago MegaRAID (formerly LSI))
  • SW RAID: SATA ports of the chipset, or a SAS HBA (e.g. Avago HBAs (formerly LSI))

Setup

  • HW RAID: setup of the hardware RAID
  • SW RAID: setup of the software RAID

RAID calculations

  • HW RAID: in the RAID chip on the controller
  • SW RAID: in the CPU
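The RAID calculations in question are mostly parity arithmetic. As an illustration only (not tied to any particular controller or to mdraid internals), the following Python sketch shows RAID 5 style XOR parity: the parity block is the XOR of the data blocks, and any single lost block can be reconstructed by XOR-ing the survivors with the parity.

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte (RAID 5 style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks of one (tiny) stripe -- illustrative sizes only.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_blocks(d0, d1, d2)

# If d1 is lost, XOR of the surviving data blocks and the parity recovers it.
recovered = xor_blocks(d0, d2, parity)
assert recovered == d1
```

This is exactly the work that is offloaded to the RAID chip with hardware RAID and performed by the CPU with software RAID.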
Writing

  HW RAID:
  • Write accesses are temporarily stored in the cache of the RAID controller.
  • At this point, the operating system already receives the write acknowledgement.
  • The data is then written to HDD/SSD.
  • Hint: The cache must be protected by a BBU, Adaptec ZMCP or LSI CacheVault.
  SW RAID:
  • Write accesses are written directly to HDD/SSD.
  • With RAID 5 and RAID 6, write accesses that do not cover a full stripe incur a write penalty.[1][2]
  • RAID 10 is recommended to avoid the write penalty.[3][4][5][6]
  • Hint: When using HDDs, their caches must be disabled to prevent data loss in the event of a power failure (see Linux Software RAID#Deactivate hard disk cache). This reduces write performance. Enterprise SSDs with power loss protection are ideal here, as their caches can remain enabled.
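The write penalty can be made concrete with the textbook per-level I/O multipliers for small (sub-stripe) random writes. This is a simplified model: a RAID 5 small write needs to read the old data and old parity and then write the new data and new parity, while a mirror only writes twice; real controllers and mdraid can coalesce full-stripe writes and avoid this.

```python
# Backend I/Os caused by one small (sub-stripe) random write -- textbook values.
WRITE_PENALTY = {
    "raid0": 1,   # one write, no redundancy
    "raid1": 2,   # write to both mirrors
    "raid10": 2,  # write to both mirrors of one pair
    "raid5": 4,   # read old data + old parity, write new data + new parity
    "raid6": 6,   # as RAID 5, but with two parity blocks
}

def backend_iops(frontend_write_iops: int, level: str) -> int:
    """Disk I/Os needed to serve the given number of small random writes."""
    return frontend_write_iops * WRITE_PENALTY[level]

print(backend_iops(1000, "raid5"))   # 4000 backend I/Os
print(backend_iops(1000, "raid10"))  # 2000 backend I/Os
```

With the same disks, a RAID 5 therefore sustains only about half the small random write rate of a RAID 10, which is why RAID 10 is recommended above.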
Reading

  HW RAID:
  • usually directly from HDD/SSD
  • Note: RAID cache hits are rare, as the typical cache size of 1 GB is small compared to the complete RAID set. Furthermore, the Linux page cache (unused RAM, which is used for caching read operations) is often larger than 1 GB.
  SW RAID:
  • directly from HDD/SSD
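Why controller cache hits are rare can be shown with simple arithmetic. The sketch below assumes uniformly random reads, and the RAM and data sizes are illustrative assumptions, not measurements; under those assumptions the hit probability is roughly bounded by cache size divided by working-set size.

```python
# Rough upper bound on random-read cache hit probability: cache size / working set.
GIB = 1024**3
controller_cache = 1 * GIB        # typical RAID controller cache
page_cache = 16 * GIB             # free RAM used by the Linux page cache (assumed)
working_set = 4 * 1024 * GIB      # 4 TiB of randomly read data (assumed)

# The controller cache covers only a tiny fraction of the data,
# while the page cache covers a noticeably larger share.
print(f"controller cache covers {controller_cache / working_set:.4%} of the data")
print(f"page cache covers {page_cache / working_set:.4%} of the data")
```

Under these assumptions the 1 GB controller cache covers well under a tenth of a percent of the data, while the page cache covers an order of magnitude more, which matches the note above.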
Supported operating systems

  HW RAID: dependent on the RAID controller, usually an extensive list of operating systems:
  • Linux
  • Windows
  • VMware
  • ...
  SW RAID:
  • Linux
SSD support
  • SSDs are supported
Booting from RAID

  HW RAID:
  • booting from every RAID level is possible
  SW RAID:
  • The GRUB2 bootloader supports Linux software RAID (mdraid) and LVM.[7] The only restriction is that the GRUB environment block cannot be used then.[8]
  • When booting from a RAID 1 using UEFI, the boot entry for the second HDD/SSD must be restored after updating grub-efi-amd64.

References


Author: Werner Fischer

Werner Fischer, working in the Knowledge Transfer team at Thomas-Krenn, completed his studies in Computer and Media Security at FH Hagenberg in Austria. He is a regular speaker at many conferences such as LinuxTag, OSMC, OSDC and LinuxCon, and an author for various IT magazines. In his spare time he enjoys playing the piano and training for a good result at the annual Linz marathon relay.


Translator: Alina Ranzinger

Alina has been working at Thomas-Krenn.AG since 2024. After her training as a multilingual business assistant, she joined Product Management as an assistant and is responsible for translating texts and for the organisation of the department.


Related articles

Mounting a Windows Share in Linux
Using rdiff-backup under Linux
Vim file management of remote hosts with netrw