
Diving Deep into OPSEC: Let's Guard Our Data

I suggest using RAID (level 1 or higher) for data security, and being careful with backups, as they can themselves become an additional point of failure if they fall into the wrong hands.

RAID can play a part in security when combined with virtual disk encryption, but its actual purpose is performance, fault tolerance and redundancy through multiple drives and layout schemes, among other specifics.

While RAID 1 offers only mirroring, thus achieving the primary purpose, if performance is not a requirement, RAID 5/6 is far more fault tolerant due to its distributed parity mechanism. If performance is a requirement, RAID 50/60 can be used, which stripes across multiple parity groups for throughput. In both cases, hot spare drives are advised, so that in case of a fault the array can be promptly rebuilt.
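To make the parity idea concrete, here is a minimal sketch (hypothetical data, not tied to any real controller or tool) of how single-parity reconstruction works in a RAID 5 stripe: a lost block is the XOR of the surviving data blocks and the parity block. RAID 6 adds a second, differently computed parity so two drives can fail.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe on a hypothetical 4-drive RAID 5: three data blocks + parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)  # P = D0 ^ D1 ^ D2

# Simulate losing the drive holding D1: rebuild it from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # the lost block is recovered exactly
```

This is also why a degraded RAID 5 array has zero remaining safety margin: every block on the failed drive exists only as this XOR relation until the rebuild finishes.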

RAID 10 is quite basic and not all that fault tolerant, so it should not be used for any even remotely sensitive storage. RAID 0 - striping - is a fool's friend.
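A quick back-of-the-envelope calculation shows why striping alone is a fool's friend: every drive in a RAID 0 set is a single point of failure, while a mirror is lost only if both copies die. The per-drive failure probability below is an assumed illustrative figure, not a real drive statistic.

```python
# Assumed per-drive failure probability over some period (illustrative only).
p = 0.05
n = 4  # drives in the stripe set

# RAID 0: the array survives only if every single drive survives.
raid0_survival = (1 - p) ** n
# RAID 1 (2-drive mirror): the array is lost only if both drives fail.
raid1_survival = 1 - p ** 2

print(round(raid0_survival, 4))  # 0.8145 -> ~18.5% chance of total loss
print(round(raid1_survival, 4))  # 0.9975 -> 0.25% chance of total loss
```

Adding drives to a stripe set makes the loss probability strictly worse, which is the opposite of what most people expect from "more hardware".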

Ofc, hardware RAID modules should be preferred wherever possible, as OS-level RAID implementations may lead to serious data loss.
 
well this is controversial :D
of course there are memes like this floating around
[attachment: zfs.png]

but in my experience hardware RAID controllers bring more pain in the a*s than stability or comfort, hence I always stick to software RAID.
 

Considering that HW RAID modules are reserved for servers or desktop workstations, the majority of end users will use mainboard-integrated or OS-level RAID.

As for HW RAID implementations, it depends on the module and drive vendor, so your statement is quite correct. The fewest problems we have had in production were with the DELL PERC series and DELL drives.

But RAID firmware is essentially a stripped-down Linux - whatever the implementation, the significant difference between modules is only the type and size of cache memory, as in the DELL BOSS module.

Whether ZFS, LVM or mdadm, OS-level RAID will use system resources rather than being limited to the controller's own, which can degrade overall system performance in the end.

So, for desktop workstation and server use cases, HW RAID would be the optimal solution.

End users should know that with any RAID implementation - apart from RAID 0 and its variants - there are no performance benefits. It's about resilience and continuity. Also, RAID does not substitute for backup.
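As a rough sketch of that trade-off, here is how usable capacity and fault tolerance shake out for the common levels, assuming n identical drives. This is a simplified model for illustration only; real arrays also lose space to metadata and hot spares.

```python
def raid_summary(level, n, size_tb):
    """Return (usable capacity in TB, drive failures tolerated) for n
    identical drives of size_tb each. Simplified model, illustration only."""
    if level == 0:                 # striping: all the space, no safety net
        return n * size_tb, 0
    if level == 1:                 # n-way mirror: one drive's worth of space
        return size_tb, n - 1
    if level == 5 and n >= 3:      # one drive's worth of distributed parity
        return (n - 1) * size_tb, 1
    if level == 6 and n >= 4:      # two drives' worth of parity
        return (n - 2) * size_tb, 2
    raise ValueError("unsupported level / drive count")

print(raid_summary(5, 6, 4))  # (20, 1): 6 x 4 TB drives in RAID 5
print(raid_summary(6, 6, 4))  # (16, 2): same drives in RAID 6
```

The point stands either way: every level trades raw capacity for the ability to keep running through failures, not for speed, and none of it replaces an off-array backup.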
 
> HW RAID implementations wise, it depends on module and drive vendor, so your statement is quite correct. The least problems in production we had is with DELL PERC series and DELL drives.
And I've had Dell-branded controllers not accept Dell-branded drives "because f**k you, that's why". That's the largest problem with hardware RAID: you get vendor-locked, bound to one particular manufacturer, and have to buy branded drives at 5x-10x the unbranded price. You also have to choose the exact models, as they might not be accepted by the card even if the drives have the same brand written on the sticker.
But I still had fewer problems with Dell-branded controllers than with HP, probably because HP uses their own hardware while Dell just rebrands common LSI RAID cards, which are much better tested because they have many more customers (Dell, Supermicro, Fujitsu, Lenovo, Intel, ...) than HP alone.
> But, RAID firmware is essentially stripped down Linux
It's not Linux but some kind of real-time operating system, at least on LSI/Dell cards.
And the most significant difference between them is their embedded list of "allowed" drive models :D
> Whether ZFS, LVM or mdadm, an O/S level RAID will utilize - and thus not been limited to controller's resources - system resources - which can degrade system performances in the end.
While ZFS is resource intensive, it does have its benefits; one needs to fully understand whether to use ZFS or not.
LVM or mdadm impact on system performance is negligible, unless we are talking about very old hardware.
> RAID does not substitute backup
100%
 

Five years ago, we decided to cease further use of HPE server machines and their hardware ecosystem for exactly the same reason you wrote in your first paragraph.

I assume it's a matter of faith. For me, HPE is an utter bad boy and DELL is a decent crook ;) Supermicro, Fujitsu and Lenovo are like the other three major monotheistic religions :rolleyes: I'm an atheist.

When PERC boots, you actually see a string referring to a Linux kernel version.

Any software RAID level other than 0 or 1 will have a performance impact.
 
> When PERC is booted, you actually see a string referring to Linux kernel version.
Interesting - does that apply only to the newest 16th-gen server model you mentioned recently?
I don't remember seeing anything about Linux in older Dell servers (up to 14th gen) with the PERC 330 or 730. I'll have a look at it some time; all of my PERC cards are currently in HBA mode.