On 27 August 2016 at 10:05, Joseph Grasser <[email protected]> wrote:

> No problem, I'm trying to cut down on cost. We're currently using a
> dedicated model which works for us on a technical level but is expensive
> (within budget, but still expensive).
>
> We are experiencing weird spikes in evictions but I think that is the
> result of developers abusing the service.
>
> Tbh I don't know what to make of the evictions yet. I'm going to dig into
> it on Monday though.
>
>

So if it's cost, I'd assume you want to minimise the number of dedicated
servers, which means you want to maximise both the capacity (in memory) per
server _and_ the performance (throughput) per server.  I'd start by looking
at the distribution of request sizes and then working out -- through a
combination of empirical measurement and theoretical analysis -- the peak
performance you can expect from a server, and comparing that with what you
get now.  That will quickly tell you whether this is an exercise worth
pursuing.
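One cheap way to get the request-size distribution is from memcached's own
slab counters, since items are binned into slab classes by size.  A minimal
sketch, assuming stock memcached `stats slabs` output already parsed into a
`key -> value` dict (key names like `1:chunk_size` follow the standard text
protocol):

```python
def size_histogram(slab_stats):
    """Approximate the stored-item size distribution from 'stats slabs'
    counters: maps each slab class's chunk_size to its used_chunks count."""
    chunk_sizes = {}
    used = {}
    for key, value in slab_stats.items():
        parts = key.split(":")
        if len(parts) != 2:
            continue  # skip global keys like 'active_slabs'
        slab, field = parts
        if field == "chunk_size":
            chunk_sizes[slab] = int(value)
        elif field == "used_chunks":
            used[slab] = int(value)
    # One histogram bucket per slab class, keyed by chunk size in bytes.
    return {size: used.get(slab, 0) for slab, size in chunk_sizes.items()}
```

Chunk sizes overstate actual item sizes somewhat (items are rounded up to
the slab's chunk size), but the shape of the histogram is usually enough to
reason about per-server throughput ceilings.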

If it turns out you're near optimum performance, there are a bunch of
secondary tricks (caching to flash, compression, alternate cache-replacement
algorithms, etc.) that you can pursue.  These techniques are not on-topic
for this list but, again, reach out offline if you want pointers.

With respect to LRU eviction, stats should give you an idea.  I'd start by
looking at the put:get ratio and the LRU times of evicted entries to get an
idea of whether abuse is the cause or whether it's a capacity issue.
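Pulling those numbers takes only a few lines over the text protocol.  A
rough sketch, assuming a stock memcached on the default port (the host and
timeout are placeholders, not anything from your setup):

```python
import socket

def parse_stats(payload):
    """Parse 'STAT <key> <value>' lines from a memcached stats response."""
    stats = {}
    for line in payload.splitlines():
        if line.startswith("STAT "):
            _, key, value = line.split(" ", 2)
            stats[key] = value
    return stats

def fetch_stats(host="127.0.0.1", port=11211, command="stats"):
    """Issue a stats command over memcached's text protocol."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((command + "\r\n").encode())
        buf = b""
        while not buf.endswith(b"END\r\n"):
            buf += sock.recv(4096)
    return parse_stats(buf.decode())

def put_get_ratio(stats):
    """set:get ratio from the global counters -- an unusually high value
    suggests write-heavy abuse rather than a plain capacity shortfall."""
    gets = int(stats.get("cmd_get", 0))
    sets = int(stats.get("cmd_set", 0))
    return sets / gets if gets else float("inf")
```

For the LRU side, `fetch_stats(command="stats items")` exposes per-slab
counters including `evicted_time` (roughly, how long ago the most recently
evicted item was last requested); values near zero mean hot items are being
evicted, which points at capacity rather than abuse.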

Rip
