echo "stats" shows the following:
cmd_set 3,000,000,000
evicted_unfetched 2,800,000,000
evictions 2,900,000,000

This looks super abusive to me. Roughly 93% of everything we set is
evicted without ever being fetched. Is that really only ~6% utilization of
the data in cache?
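Working those counters through (figures as quoted above; the variable
names are mine, and the utilization estimate is rough since it ignores
items still resident or expired):

```python
# Counters from the `stats` output above
cmd_set = 3_000_000_000
evicted_unfetched = 2_800_000_000
evictions = 2_900_000_000

# Fraction of stored items read at least once before eviction
# (rough: ignores items still in cache or expired unfetched)
utilization = 1 - evicted_unfetched / cmd_set
# Fraction of evictions where the item was never fetched at all
never_fetched = evicted_unfetched / evictions

print(f"utilization ~ {utilization:.1%}")              # ~ 6.7%
print(f"never-fetched evictions: {never_fetched:.1%}")  # 96.6%
```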

On Sat, Aug 27, 2016 at 1:35 PM, dormando <[email protected]> wrote:

> You could comb through stats looking for things like evicted_unfetched,
> unbalanced slab classes, etc.
>
> 1.4.31 with `-o modern` can either make a huge improvement in memory
> efficiency or a marginal one. I'm unaware of it being worse.
>
> Just something to consider if cost is your concern.
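For the "comb through stats" suggestion above, one low-tech approach is to
group the per-slab-class counters from `stats items` and flag classes where
most evictions were never fetched. A minimal sketch (the helper name and
sample lines are mine; real input would come from the server's text
protocol):

```python
def parse_stats_items(lines):
    """Group text-protocol `stats items` lines, e.g.
    'STAT items:3:evicted 42', into {slab_class: {counter: value}}."""
    per_class = {}
    for line in lines:
        if not line.startswith("STAT items:"):
            continue
        key, value = line.split(" ")[1:3]
        _, clsid, name = key.split(":")
        per_class.setdefault(int(clsid), {})[name] = int(value)
    return per_class

# Canned sample; in practice these lines come from a live
# `stats items` response.
sample = [
    "STAT items:3:evicted 2000000",
    "STAT items:3:evicted_unfetched 1900000",
]
classes = parse_stats_items(sample)
ratio = classes[3]["evicted_unfetched"] / classes[3]["evicted"]
print(f"class 3: {ratio:.0%} of evictions were never fetched")  # 95%
```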
>
> On Sat, 27 Aug 2016, Joseph Grasser wrote:
>
> > We are running 1.4.13 on wheezy.
> > In the environment I am looking at there is a positive correlation between
> gets and puts. The ratio is something like 10 gets : 15 puts. The eviction
> spikes are also occurring
> at peak put times (which kind of makes sense with the mem pressure).
> I think the application is some kind of report generation tool - it's hard
> to say, my visibility into
> > the team's work is pretty low right now, as I am a new hire.
> >
> > On Sat, Aug 27, 2016 at 12:34 PM, dormando <[email protected]> wrote:
> >       What version are you on and what're your startup options, out of
> >       curiosity?
> >
> >       A lot of the more recent features can help with memory efficiency,
> for
> >       what it's worth.
> >
> >       On Sat, 27 Aug 2016, Joseph Grasser wrote:
> >
> >       >
> >       > No problem, I'm trying to cut down on cost. We're currently using a
> dedicated model which works for us on a technical level but is expensive
> (within budget but still
> >       expensive).
> >       >
> >       > We are experiencing weird spikes in evictions but I think that
> is the result of developers abusing the service.
> >       >
> >       > Tbh I don't know what to make of the evictions yet. I'm going to
> dig into it on Monday though.
> >       >
> >       >
> >       > On Aug 27, 2016 1:55 AM, "Ripduman Sohan" <
> [email protected]> wrote:
> >       >
> >       >             On Aug 27, 2016 1:46 AM, "dormando" <
> [email protected]> wrote:
> >       >                   >
> >       >                   > Thank you for the tips guys!
> >       >                   >
> >       >                   > The limiting factor for us is actually
> memory utilization. We are using the default configuration on sizable ec2
> nodes and pulling only
> >       >                   like 20k qps per node, which is fine
> >       >                   > because we need to shard the key set over x
> servers to handle the mem req (30G) per server.
> >       >                   >
> >       >                   > I should have looked into that before
> posting.
> >       >                   >
> >       >                   > I am really curious about network saturation
> though. 200k gets at 1mb per get is a lot of traffic... how can you hit
> that mark without
> >       >                   saturation?
> >       >
> >       >                   Most people's keys are a lot smaller. In
> >       >                   multiget tests with 40 byte keys I can pull
> >       >                   20 million+ keys/sec out of the server,
> >       >                   probably at less than 10gbps too. It tends to
> >       >                   cap between 600k and 800k/s if you need to do
> >       >                   a full roundtrip per key fetch, limited by
> >       >                   the NIC. Lots of tuning is required to get
> >       >                   around that.
> >       >
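The gap dormando describes (one round trip per key vs. one request for
many keys) comes down to the text protocol's multi-key `get`. A minimal
sketch of building and parsing such a request, with a canned response in
place of a live server (helper names are mine):

```python
def build_multiget(keys):
    """One text-protocol request fetching all keys in a single round trip."""
    return ("get " + " ".join(keys) + "\r\n").encode()

def parse_values(response):
    """Parse 'VALUE <key> <flags> <bytes>' blocks from a get response."""
    out, lines, i = {}, response.split(b"\r\n"), 0
    while i < len(lines):
        if lines[i].startswith(b"VALUE "):
            key = lines[i].split()[1].decode()
            out[key] = lines[i + 1]  # data line follows the VALUE header
            i += 2
        elif lines[i] == b"END":
            break
        else:
            i += 1
    return out

req = build_multiget(["a", "b", "c"])   # b'get a b c\r\n'
# Canned response: key 'b' missing, as memcached simply omits misses
canned = b"VALUE a 0 2\r\nhi\r\nVALUE c 0 2\r\nyo\r\nEND\r\n"
print(parse_values(canned))             # {'a': b'hi', 'c': b'yo'}
```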
> >       >
> >       > I think (but may be wrong) the 200K TPS result is based on 1K
> values.  Dormando should be able to correct me.
> >       >
> >       > 20K TPS does seem a little low though.  If you're bound by
> memory set size, have you thought of the cost/tradeoff benefits of using
> dedicated servers for your
> >       memcache?
> >       > I'm quite interested to find out more about what you're trying
> to optimise.  Is it minimising number of servers, maximising query rate,
> both, none, etc?
> >       >
> >       > Feel free to reach out directly if you can't share this
> publicly.
> >       >
> >       >
> >
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
