Yeah, 5 instances on different ports on each bare-metal machine.
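For anyone wondering what that looks like in practice, here is a minimal sketch of a pre-cephadm ceph.conf for running several RGW instances on one host. The instance names and ports are made up, and it assumes the civetweb frontend that was standard in the Luminous era discussed below:

```
# Hypothetical example: two of the five RGW instances on one host,
# each with its own client name, keyring, and listening port.
# Repeat the pattern (.c, .d, .e) for the remaining instances.
[client.rgw.host1.a]
rgw_frontends = civetweb port=8001

[client.rgw.host1.b]
rgw_frontends = civetweb port=8002
```

Each instance then runs as its own radosgw daemon (`radosgw --cluster ceph -n client.rgw.host1.a`, etc.), and the load balancer spreads traffic across all of the ports.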

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: [email protected]<mailto:[email protected]>
---------------------------------------------------

From: [email protected] <[email protected]>
Sent: Monday, September 13, 2021 2:24 PM
To: Szabo, Istvan (Agoda) <[email protected]>; Eugen Block <[email protected]>
Cc: ceph-users <[email protected]>
Subject: Re: RE: [ceph-users] Re: How many concurrent users can be supported by 
a single Rados gateway

Dear Istvan,

Thanks a lot for sharing.  I have a question: how do you run 15 RGWs on 3 nodes?
Using VMs, containers, or directly on the physical machines? I am not sure whether
it is good (or even possible) to run multiple RGWs directly on a physical machine...

best regards,

Samuel



________________________________
[email protected]<mailto:[email protected]>

From: Szabo, Istvan (Agoda) <[email protected]>
Date: 2021-09-13 04:45
To: [email protected]; Eugen Block <[email protected]>
CC: ceph-users <[email protected]>
Subject: RE: [ceph-users] Re: How many concurrent users can be supported by a 
single Rados gateway
Good topic, I'd be interested also. One of the Red Hat documents suggests 1 GW per
50 OSDs, but I don't think that formula is very relevant. I've had a couple of
occasions where users did something stupid and totally DDoSed the whole cluster.
What I've done is add an additional 4 RGWs on each of the mon/mgr nodes where the
gateway is running to sustain the super high load, so currently I'm running about
15 RGWs behind an HAProxy load balancer on 3 nodes.
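As a rough illustration of that topology, the HAProxy side of such a setup can look like the fragment below. All addresses, ports, and server names are invented placeholders, not the actual Agoda configuration:

```
# Hypothetical HAProxy config fanning requests out to 5 RGW
# instances per node across 3 nodes (15 backends in total).
frontend rgw_front
    bind *:80
    default_backend rgw_back

backend rgw_back
    balance roundrobin
    option httpchk GET /
    server node1-rgw1 10.0.0.1:8001 check
    server node1-rgw2 10.0.0.1:8002 check
    # ... remaining instances on node1, node2, and node3
    # follow the same pattern with their own ports.
```

The health checks (`check` plus `option httpchk`) matter here: when one RGW instance is overwhelmed or dies, HAProxy takes it out of rotation instead of letting it drag down client requests.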


-----Original Message-----
From: [email protected]<mailto:[email protected]> 
<[email protected]<mailto:[email protected]>>
Sent: Saturday, September 11, 2021 1:51 PM
To: Eugen Block <[email protected]<mailto:[email protected]>>
Cc: ceph-users <[email protected]<mailto:[email protected]>>
Subject: [ceph-users] Re: How many concurrent users can be supported by a 
single Rados gateway


Thanks for the suggestions.

My viewpoint may be wrong, but I think stability is utmost for us, and an older
version such as Luminous may be much better battle-tested than recent ones. Unless
there are instability or bug reports, I would still trust the older versions. Just
my own preference on which version earns my trust.

thanks a lot,

Samuel




[email protected]<mailto:[email protected]>

From: Eugen Block
Date: 2021-09-10 17:21
To: huxiaoyu
CC: ceph-users
Subject: Re: [ceph-users] How many concurrent users can be supported by a
single Rados gateway

The first suggestion is to not use Luminous, since it's already EOL. We noticed
major improvements in performance when upgrading from L to Nautilus, and N will
also be EOL soon. Since there are some reports of performance degradation when
upgrading to Pacific, I would recommend Octopus.


Zitat von [email protected]<mailto:[email protected]>:

> Dear Cephers,
>
> I am planning a Ceph cluster (Luminous 12.2.13) for hosting online
> courses for one university.  The data would mostly be video media, and
> thus a 4+2 EC-coded object store together with the CivetWeb RADOS
> gateway will be utilized.
>
> We plan to use 4 physical machines solely as Rados gateways, each with
> 2x Intel 6226R CPUs and 256 GB of memory, to serve 8000 students
> concurrently, each of whom may incur 2x 2 Mb/s video streams.
>
> Is this 4-machine Rados gateway setup a reasonable configuration for
> 8000 users, or is it overkill, or insufficient?
>
> Suggestions and comments are highly appreciated,
>
> best regards,
>
> Samuel
>
>
>
> [email protected]<mailto:[email protected]>
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]



