Marvell and HPE Introduce NVMe RAID Adapter for Server Boot Drives
by Billy Tallis on October 6, 2020 8:30 AM EST
Posted in: SSDs, Storage, Marvell, RAID, Enterprise, Enterprise SSDs, NVMe, SK Hynix, HPE
In 2018 Marvell announced the 88NR2241 Intelligent NVMe Switch: the first—and so far, only—NVMe hardware RAID controller of its kind. Now that chip has scored its first major (public) design win with Hewlett Packard Enterprise. The HPE NS204i-p is a new RAID adapter card for M.2 NVMe SSDs, intended to provide RAID-1 protection to a pair of 480GB boot drives in HPE ProLiant and Apollo systems.
The HPE NS204i-p is a half-height, half-length PCIe 3.0 x4 adapter card designed by Marvell for HPE. It features the 88NR2241 NVMe switch and two M.2 PCIe x4 slots that connect through it. The 88NR2241 is not a typical PCIe switch of the sort used to fan out extra PCIe lanes; it operates at a higher level and natively understands the NVMe protocol.
The NS204i-p adapter is configured specifically to provide RAID-1 (mirroring) of two SSDs, presenting them to the host system as a single NVMe device. This is the key advantage of the 88NR2241 over other NVMe RAID solutions: the host system doesn't need to know anything about the RAID array and continues to use the standard NVMe drivers. Competing NVMe RAID solutions on the market are either SAS/SATA/NVMe "tri-mode" RAID controllers that require NVMe drives to be accessed through proprietary SCSI interfaces, or software RAID systems with the accompanying CPU overhead.
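To illustrate that transparency, here is a minimal sketch, assuming a Linux host and the kernel's standard NVMe sysfs layout (nothing here is HPE- or Marvell-specific tooling): the mirrored pair behind the NS204i-p surfaces as one controller with one namespace, indistinguishable from a plain NVMe drive.

```python
# Minimal sketch (Linux, standard NVMe sysfs layout -- not HPE-specific tooling):
# list NVMe controllers exactly as the host OS sees them. Behind the NS204i-p,
# the mirrored pair shows up as a single controller with a single namespace,
# so the stock nvme driver and ordinary tools work unmodified.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    namespaces = sorted(ns.name for ns in ctrl.glob(ctrl.name + "n*"))
    print(f"{ctrl.name}: {model} -> {namespaces}")
```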
Based on the provided photos, it looks like HPE is equipping the NS204i-p with a pair of SK hynix NVMe SSDs. The spec sheet indicates these are from a read-oriented product tier, so the endurance rating should be 1 DWPD (somewhere around 876 TBW for 480GB drives).
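That TBW figure is straightforward arithmetic from capacity, DWPD, and the warranty period. A quick sanity check, assuming the five-year warranty term typical of enterprise SSDs (an assumption; the spec sheet does not explicitly state the term):

```python
# Endurance arithmetic: TBW = capacity (GB) x DWPD x warranty days / 1000.
# The 5-year warranty term is assumed here, as is typical for enterprise SSDs;
# it is not a figure taken from HPE's spec sheet.
capacity_gb = 480
dwpd = 1.0
warranty_days = 5 * 365  # 1825 days
tbw = capacity_gb * dwpd * warranty_days / 1000
print(f"{tbw:.0f} TBW")  # -> 876 TBW, matching the estimate above
```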
HPE claims this solution offers several times the performance of SATA boot drives, and it provides high availability for OS and log storage without consuming front hot-swap bays on the server. The HPE NS204i-p is now available for purchase from HPE, but pricing has not been publicly disclosed.
Source: Marvell, HPE
27 Comments
Spunjji - Wednesday, October 7, 2020 - link
Honestly, I got the impression that redundancy was the entire point - even so, it should still be faster than solutions using SATA or some other form of hack.
James5mith - Tuesday, October 6, 2020 - link
It's cute that you think a "read oriented" drive is 1DWPD. Realistically, they are probably 0.3DWPD max.
schujj07 - Tuesday, October 6, 2020 - link
These are enterprise grade drives. They will work for the 1DWPD rating as it would be very expensive for companies to have to replace them too early.
Kevin G - Tuesday, October 6, 2020 - link
I wonder if there will be a variant of this chip that supports six NVMe drives in RAID 6 using a PCIe x16 link. The bandwidth would still be there for four-drive reads, plus the ability for the controller to generate the two different sets of parity for the remaining two NVMe drives. SSDs are generally more reliable than their mechanical counterparts, but that just reduces the likelihood of a failure for an array rather than making it an impossibility. As such, there will always be that niche market that wants the redundancy to ensure data integrity while increasing speed.
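For context on the "two different sets of parity" mentioned here: RAID 6 pairs a plain XOR parity (P) with a Reed-Solomon syndrome (Q) computed over GF(2^8). A minimal illustrative sketch of that standard construction follows; it is not Marvell's implementation, just the textbook math.

```python
# Minimal illustrative sketch (not Marvell's implementation): the standard
# RAID-6 construction. P is a plain XOR across data blocks; Q is a
# Reed-Solomon syndrome over GF(2^8), which is what allows recovery from
# any two simultaneous drive failures.

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the RAID-6 polynomial 0x11D."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return result

def gf_pow(base: int, exp: int) -> int:
    """base**exp in GF(2^8)."""
    r = 1
    for _ in range(exp):
        r = gf_mul(r, base)
    return r

def raid6_parity(data_blocks):
    """Compute (P, Q) parity for equal-sized blocks, one block per data drive."""
    p = bytearray(len(data_blocks[0]))
    q = bytearray(len(data_blocks[0]))
    for drive_index, block in enumerate(data_blocks):
        coeff = gf_pow(2, drive_index)  # generator 2 raised to the drive index
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(coeff, byte)
    return bytes(p), bytes(q)

# Four data drives plus P and Q would make the six-drive array described above.
blocks = [bytes([d] * 4) for d in (0x11, 0x22, 0x33, 0x44)]
p, q = raid6_parity(blocks)
print("P:", p.hex(), "Q:", q.hex())
```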
MenhirMike - Tuesday, October 6, 2020 - link
Curious, how does the Host system know if there is a RAID failure? Is it exposed via S.M.A.R.T., or as a sensor, or using something proprietary?
Desierz - Tuesday, October 6, 2020 - link
What happens if one of the drives becomes corrupt? Won't the corrupt data be mirrored onto the other drive?
Dug - Tuesday, October 6, 2020 - link
Yup, but that's not what RAID is trying to prevent. That's what backups are for.
Dug - Tuesday, October 6, 2020 - link
This does seem like a niche product. The host isn't generally used except for some reads and occasional updates. Reboot times are dependent on the manufacturer's BIOS, not the drives, so I don't see a need for speed here. The space savings are questionable, as most servers already have OS drive space in the back for 2.5" drives, or even run on a pair of SD cards.

I don't see any mention of hot-swap, so if there is any downtime because you have to bring the entire server down to replace a drive, then it kind of defeats the purpose. If you had a single drive and it went down, it would take the same amount of time to replace it and restore from backup.
Spunjji - Wednesday, October 7, 2020 - link
Agreed about it being niche, for sure. Downtime to replace a drive doesn't really defeat the purpose though - it allows you to control *when* that downtime happens, as opposed to having the server go down randomly whenever the drive fails.
foobardotcom - Friday, October 9, 2020 - link
These kinds of cards are quite handy if you have, for example, a 1U server chassis with 8x 2.5" disk bays and no internal M.2 slots. These cards let you use a PCIe slot for the OS disk and leave all the 2.5" disks for the data RAID. This kind of non-transparent RAID setup is easy when dealing with, for example, a Debian UEFI installation, because you don't have to fight overcomplicated preseed configs and pull your hair out trying to make it create another UEFI partition to provide some kind of redundancy. SD card based setups are geared more towards something like booting VMware ESX from the SD cards, not constant writing, not even system logs.

Usually these kinds of cards come with some server-vendor-provided management software that can communicate with the controller to retrieve hardware information from those drives. Most probably the MTBF of these cards is so high that a total loss of the card itself is a non-issue in the big picture when compared with more common read errors.