If the objects are created and then discarded per request, you might be able to 
spawn a new process to handle the request - or a pool of them - then just kill 
each one and start a fresh process. Constant data could possibly live in shared 
memory. 

This is the poor man’s generational garbage collector. 
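
A minimal supervisor sketch of that idea - the worker binary here is a 
hypothetical stand-in (plain `echo`), not a real server:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runBatch launches one short-lived worker process, waits for it to
// finish, and returns its output. When the worker exits, its entire
// heap is reclaimed by the OS in one shot -- the parent's GC never
// has to trace those objects. "echo" stands in for a real worker.
func runBatch(batch int) (string, error) {
	cmd := exec.Command("echo", fmt.Sprintf("worker served batch %d", batch))
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	// Supervisor loop: restart a fresh worker for each batch of requests.
	for batch := 1; batch <= 3; batch++ {
		out, err := runBatch(batch)
		if err != nil {
			log.Fatalf("worker failed: %v", err)
		}
		fmt.Print(out)
	}
}
```

In a real setup the worker would serve some fixed number of requests (or run 
for a fixed time) before exiting, and constant data would be mapped read-only 
so every worker shares one copy.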

> On Oct 29, 2023, at 9:43 PM, Zhihui Jiang <[email protected]> wrote:
> 
> Hi there,
> 
> We have a large-scale recommendation system, built in Go, serving millions of 
> users. It worked well until recently, when we tried to enlarge our index 
> (candidate pool) by 10X; the number of candidate objects created to serve each 
> user request then also grows by 5-10X. That huge number of heap-allocated 
> objects causes a big jump in the CPU spent on GC itself and thus significantly 
> reduces system throughput.
> 
> We have tried different ways to reduce GC cost, such as setting a soft memory 
> limit and dynamically tuning GOGC, similar to what is described here. Those 
> indeed helped, but they don't reduce the intrinsic cost of GC, because the 
> huge number of objects on the heap still have to be recycled eventually. 
> 
> I'm wondering if you have any suggestions about how to reduce object 
> allocations during request serving?
> 
> Thanks!
> Best
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/a6b8f58f-9452-43e6-9e63-92d944dd0caan%40googlegroups.com.
