Thanks, Anthony.

Each R740 is configured as follows:

- 1 BOSS boot drive (Proxmox)
- 1 Samsung PM1725a NVMe SSD (used as the DB device for each OSD)
- 24 OSDs on 12 Gb/s SAS SSDs
- 1 PERC H730P RAID card (in HBA mode)
- Dual-port 25 GbE NIC (NDC, not PCIe)
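
(For reference, a hedged sketch of how the per-OSD DB slices on the shared PM1725a are typically laid down at OSD creation time with ceph-volume's batch mode, which splits the NVMe into equal-sized DB LVs. The device names are placeholders; substitute your actual SAS SSDs and the NVMe device.)

```shell
# Sketch only: /dev/sd[b-y] stands in for the 24 SAS SSDs and
# /dev/nvme0n1 for the PM1725a -- adjust to your enumeration.
ceph-volume lvm batch --osds-per-device 1 \
    /dev/sd[b-y] --db-devices /dev/nvme0n1
```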

I was considering replacing the PERC H740P RAID card with the HBA330 Mini 
Monolithic, but only if it would improve performance.

I have Cisco Nexus 9000 switches with the capability to go to 40 GbE or 
100 GbE. I was considering adding 40 GbE NICs to each node in the cluster, to 
be used as a private network for Ceph, then doing some type of link 
aggregation with the two 25 GbE ports on the NDC NIC. I do not know much 
about link aggregation/LACP, but I'm currently studying it, as it seems to 
be a necessity in a production environment.
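
(Since you're studying LACP: on Proxmox, an 802.3ad bond of the two 25 GbE ports might look roughly like this in /etc/network/interfaces. This is a sketch with assumed interface names (eno1/eno2) and an example address; the switch side needs a matching port-channel in LACP active mode.)

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

You can verify the negotiated state afterwards with `cat /proc/net/bonding/bond0`.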

Thoughts?


Regards,
Anthony Fecarotta
Founder & President
[email protected] | 224-339-1182 | (855) 625-0300
1 Mid America Plz Flr 3, Oakbrook Terrace, IL 60181
www.linehaul.ai | https://www.linkedin.com/in/anthony-fec/

On Sun May 18, 2025, 08:09 PM GMT, Anthony D'Atri 
<mailto:[email protected]> wrote:
> My experience is that an IR HBA with FBWC and a supercap can somewhat 
> improve latency with slow media. Wrapping each drive in a VD to enable WB 
> caching, though, is extra work and confounds drive metrics.
>
> It's also been my experience that cache modules / BBUs can be flaky, and 
> these things really need additional monitoring (hint: iDRAC isn't enough).
>
> If you don’t already have optional cache / BBU aka CV, I wouldn’t spend the $ 
> retrofitting, especially on systems from 6-7 years ago. Put the $ toward 
> faster networking or NVMe-enabled systems. The R640 / R740 can be had in 
> all-NVMe chassis and is an inexpensive way to break out of the SAS/SATA trap.
>
> There are no fewer than 40 R740 chassis types, if you count risers. This 
> document
>
> https://dl.dell.com/manuals/common/dellemc-nvme-io-topologies-poweredge.pdf
>
> gives you a taste.
>
>
> Assuming that these are H330mini or similar, and all SAS/SATA, I personally 
> would set the HBA mode / personality to JBOD / passthrough / HBA and act like 
> the RoC isn’t there. Including boot drives if you don’t have BOSS. ymmv.
>
> Whatever you do, I advise using DSU to update firmware. Old firmware on these 
> LSI / PERC / Avago / Broadcom HBAs can present significant issues.
>
>> On May 18, 2025, at 8:11 AM, Anthony Fecarotta <[email protected]> wrote:
>>
>> Does running a RAID controller in HBA mode (not to be confused with IT mode) 
>> impact Ceph performance compared to using a dedicated HBA card? Is there any 
>> documentation or benchmarking data showing improved performance with true 
>> HBA hardware?
>>
>> For what it's worth my cluster is on Dell PowerEdge R740 machines.
>>
>> Thank you for your insights.
>>
>>
>> Regards,
>> Anthony Fecarotta
>> _______________________________________________
>> ceph-users mailing list -- [email protected]
>> To unsubscribe send an email to [email protected]
>
