>> 
>> Has anyone deployed Ceph on Supermicro SuperStorage nodes with 60/90 HDDs + 4 
>> NVMe for WAL?

Be aware that offloading WAL+DB is not a panacea.  There are certain benefits, 
but it is not likely to improve throughput.
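To put numbers on the sharing ratio, here is a quick sketch using the configuration quoted below (4x 30 TB NVMe fronting 90 HDD OSDs); the figures are purely illustrative:

```python
# Rough WAL/DB device-sharing arithmetic for the proposed chassis.
# Drive counts come from the quoted configuration; nothing here is measured.
nvme_count = 4
nvme_capacity_tb = 30
hdd_osds = 90

osds_per_nvme = hdd_osds / nvme_count            # 22.5 OSDs share each NVMe
db_slice_tb = nvme_capacity_tb / osds_per_nvme   # ~1.33 TB of DB per OSD

print(f"{osds_per_nvme} OSDs per NVMe, ~{db_slice_tb:.2f} TB DB slice each")
```

Note also that losing a single NVMe takes roughly 22 OSDs down with it, which compounds the blast-radius concerns with dense nodes.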

>> 
>> Newest models support 144cores in single socket and several TB of ram 
>> without issues.

Cores, or hyperthreads?  I suspect this is the AMD EPYC 9825 with 144c / 288 
threads, with a list price of USD 13,000.

>> 
>> But as far as we understand from the technical notes, it uses SAS expanders 
>> to connect all the disks: 2 to 4 SAS expanders for the whole chassis.

As I understand it, anything with more than 8 drives connecting to a single HBA 
is going to use expanders.  If you're using rotational media, expanders are the 
least of your concerns, honestly.

>> 
>> We’re looking at the following configuration:
>> 
>> Intel Xeon 96cores @ 2.4

That doesn't agree with what you wrote above re 144 cores.  I'll assume that 
here you mean 96c / 192t.

>> 
>> 1TB RAM
>> 
>> 2 SSD for OS.
>> 
>> 4x 30 TB KIOXIA NVMe

Be careful that these are TLC, not one of the QLC models.  QLC is not a good 
fit for this application.  However, QLC would be a great alternative to the 
HDDs, assuming that this cluster is for RGW service.

>> 
>> 90 HDD * 22 TB HGST

This gives you 2 threads / vcores per OSD, with a handful left over for mon, 
mgr, rgw, etc.  This is on the light side, especially since you likely would 
want at least two and perhaps even more RGW daemons per node as well.  Under 
expansion or component loss you may saturate CPU, leading to slow requests and 
all manner of unpleasantness.  
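To make the arithmetic explicit, a sketch assuming 96c / 192t, with a guessed allowance for the non-OSD daemons:

```python
# Threads available per OSD on the proposed node (assumed figures).
threads = 96 * 2   # 96 cores / 192 hyperthreads
osds = 90

print(f"{threads / osds:.2f} threads per OSD before overhead")   # ~2.13

# Reserve a guessed allowance for mon/mgr/RGW/OS and recompute:
reserved = 12
print(f"{(threads - reserved) / osds:.2f} threads per OSD after overhead")  # ~2.00
```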

>> 
>> 4x25 Gbps or 2x100 Gbps
>> 
>> Main use RGW.

So it is.

As Mark wrote, there are issues with dense toploaders.

* If your RGW use-case includes a significant number of hot, small objects, you 
will want to place those on a replicated SSD pool; otherwise the HDDs will be 
hotspots.
* 60 or 90 drives will often bottleneck a single HBA.  If you do this, don't 
waste your money and karma on a RAID HBA.
* Dense toploaders are heavy; especially if you have a raised floor, you may 
not be able to put more than a couple in each rack safely.
* Same for power: a rack that can only be populated 25% full erodes any 
perceived cost benefit.
* If your cluster comprises a small number of these chassis, say <10, that's a 
very, very large blast radius.  When one of those nodes halts and catches fire, 
IF you have sufficient spare capacity to heal, that recovery will be a 
thundering herd that degrades performance, and could easily take months to 
complete.  During that time you are at increased risk of data unavailability or 
loss.
* Dense systems are prone to network saturation.
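To illustrate the blast-radius point, a back-of-the-envelope recovery estimate; every figure here (fill level, cluster size, effective backfill rates) is an assumption, and real clusters throttle recovery to protect client traffic:

```python
# How long does re-replicating one failed dense node take?  Illustrative only.
node_data_tb = 90 * 22 * 0.7   # 90 x 22 TB drives, assumed ~70% full
surviving_hdds = 9 * 90        # e.g. a 10-node cluster losing one node

# Effective per-HDD backfill rates, from optimistic to heavily throttled:
for eff_mb_s in (10, 2, 0.5):
    agg_tb_per_day = surviving_hdds * eff_mb_s * 86400 / 1e6
    days = node_data_tb / agg_tb_per_day
    print(f"{eff_mb_s:>4} MB/s per HDD -> {days:6.1f} days")
```

Even the optimistic case runs for days; with realistic throttling and concurrent client load it stretches to weeks or months, all of it spent at reduced redundancy.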

In most cases you would be better off with a larger number of more 
modestly-equipped servers, even if you stick with HDDs.

But for most RGW workloads I would really suggest more modestly equipped 1U 
servers with 1-2 TLC SSDs for index/meta/log/etc. and the default storage 
class, and the balance of drive bays populated with large QLC SSDs for bulk 
data.

>> 
>> Regards
>> 
>> 
>> MANUEL RIOS FERNANDEZ
>> CEO – EasyDataHost
>> Phone: 677677179
>> Web: www.easydatahost.com
>> Email: [email protected]
>> 
>> 
>> 
>> _______________________________________________
>> ceph-users mailing list -- [email protected]
>> To unsubscribe send an email to [email protected]
> 
> -- 
> Best Regards,
> Mark Nelson
> Head of Research and Development
> 
> Clyso GmbH
> p: +49 89 21552391 12 | a: Minnesota, USA
> w: https://clyso.com | e: [email protected]
> 
> We are hiring: https://www.clyso.com/jobs/